Test Report: KVM_Linux_crio 20321

2564366430c28bc1e44cd7de7532514f5935ec82:2025-01-27:38096

Test fail (17/308)

TestAddons/parallel/Ingress (491.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-097644 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-097644 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-097644 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e1bbc3eb-e3d8-4361-986a-7836ef9e6bac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:250: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-097644 -n addons-097644
addons_test.go:250: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-01-27 14:18:14.493575552 +0000 UTC m=+766.166369703
addons_test.go:250: (dbg) Run:  kubectl --context addons-097644 describe po nginx -n default
addons_test.go:250: (dbg) kubectl --context addons-097644 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-097644/192.168.39.228
Start Time:       Mon, 27 Jan 2025 14:10:14 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hck28 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hck28:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m                   default-scheduler  Successfully assigned default/nginx to addons-097644
Normal   Pulling    2m8s (x4 over 8m)    kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     74s (x4 over 6m45s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     74s (x4 over 6m45s)  kubelet            Error: ErrImagePull
Normal   BackOff    5s (x10 over 6m45s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     5s (x10 over 6m45s)  kubelet            Error: ImagePullBackOff
addons_test.go:250: (dbg) Run:  kubectl --context addons-097644 logs nginx -n default
addons_test.go:250: (dbg) Non-zero exit: kubectl --context addons-097644 logs nginx -n default: exit status 1 (74.693746ms)

** stderr **
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:250: kubectl --context addons-097644 logs nginx -n default: exit status 1
addons_test.go:251: failed waiting for ngnix pod: run=nginx within 8m0s: context deadline exceeded
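Note: the failure is a Docker Hub pull rate limit (toomanyrequests while pulling docker.io/nginx:alpine), not an ingress or cluster fault. A possible mitigation for reruns, sketched here only as an assumption about the CI environment (the commands below are not part of this run's log), is to side-load the image into the profile's node so the kubelet never pulls from docker.io, or to authenticate the pulls:

    # side-load the image into the cluster node
    # (assumes a local docker daemon that can still pull nginx:alpine)
    docker pull docker.io/nginx:alpine
    out/minikube-linux-amd64 -p addons-097644 image load docker.io/nginx:alpine

    # or create Docker Hub credentials for authenticated pulls (placeholder values)
    kubectl --context addons-097644 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<token>

The secret route would additionally require the pod manifest (testdata/nginx-pod-svc.yaml) to reference the secret via imagePullSecrets.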
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-097644 -n addons-097644
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 logs -n 25: (1.344399441s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-671066              | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | -o=json --download-only              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | -p download-only-223205              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-223205              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-671066              | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-223205              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | --download-only -p                   | binary-mirror-105715 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | binary-mirror-105715                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46267               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-105715              | binary-mirror-105715 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| addons  | enable dashboard -p                  | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | addons-097644                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | addons-097644                        |                      |         |         |                     |                     |
	| start   | -p addons-097644 --wait=true         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:09 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | -p addons-097644                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:10 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-097644 ip                     | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC | 27 Jan 25 14:16 UTC |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:16 UTC |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:05:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:05:43.780693 1013451 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:05:43.780813 1013451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:43.780825 1013451 out.go:358] Setting ErrFile to fd 2...
	I0127 14:05:43.780832 1013451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:43.781030 1013451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:05:43.781664 1013451 out.go:352] Setting JSON to false
	I0127 14:05:43.782666 1013451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17291,"bootTime":1737969453,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:05:43.782784 1013451 start.go:139] virtualization: kvm guest
	I0127 14:05:43.784893 1013451 out.go:177] * [addons-097644] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:05:43.787056 1013451 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:05:43.787061 1013451 notify.go:220] Checking for updates...
	I0127 14:05:43.789034 1013451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:05:43.790539 1013451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:05:43.791834 1013451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:43.792947 1013451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:05:43.794209 1013451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:05:43.795600 1013451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:05:43.828945 1013451 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:05:43.830536 1013451 start.go:297] selected driver: kvm2
	I0127 14:05:43.830549 1013451 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:05:43.830562 1013451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:05:43.831266 1013451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:43.831371 1013451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:05:43.846805 1013451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:05:43.846858 1013451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:05:43.847096 1013451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:05:43.847130 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:05:43.847177 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:05:43.847185 1013451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:05:43.847240 1013451 start.go:340] cluster config:
	{Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:05:43.847356 1013451 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:43.849197 1013451 out.go:177] * Starting "addons-097644" primary control-plane node in "addons-097644" cluster
	I0127 14:05:43.850425 1013451 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:05:43.850456 1013451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:05:43.850465 1013451 cache.go:56] Caching tarball of preloaded images
	I0127 14:05:43.850551 1013451 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:05:43.850561 1013451 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:05:43.850859 1013451 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json ...
	I0127 14:05:43.850881 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json: {Name:mkf76d9208747a70ff9df6e74ebaa16aff66d9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:43.851032 1013451 start.go:360] acquireMachinesLock for addons-097644: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:05:43.851095 1013451 start.go:364] duration metric: took 44.724µs to acquireMachinesLock for "addons-097644"
	I0127 14:05:43.851120 1013451 start.go:93] Provisioning new machine with config: &{Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:05:43.851186 1013451 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:05:43.852924 1013451 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0127 14:05:43.853096 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:05:43.853162 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:05:43.867886 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I0127 14:05:43.868410 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:05:43.868979 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:05:43.869040 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:05:43.869524 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:05:43.869744 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:05:43.869931 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:05:43.870113 1013451 start.go:159] libmachine.API.Create for "addons-097644" (driver="kvm2")
	I0127 14:05:43.870140 1013451 client.go:168] LocalClient.Create starting
	I0127 14:05:43.870192 1013451 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem
	I0127 14:05:43.971967 1013451 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem
	I0127 14:05:44.102745 1013451 main.go:141] libmachine: Running pre-create checks...
	I0127 14:05:44.102770 1013451 main.go:141] libmachine: (addons-097644) Calling .PreCreateCheck
	I0127 14:05:44.103352 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:05:44.103882 1013451 main.go:141] libmachine: Creating machine...
	I0127 14:05:44.103898 1013451 main.go:141] libmachine: (addons-097644) Calling .Create
	I0127 14:05:44.104114 1013451 main.go:141] libmachine: (addons-097644) creating KVM machine...
	I0127 14:05:44.104136 1013451 main.go:141] libmachine: (addons-097644) creating network...
	I0127 14:05:44.105430 1013451 main.go:141] libmachine: (addons-097644) DBG | found existing default KVM network
	I0127 14:05:44.106433 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.106217 1013473 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123ba0}
	I0127 14:05:44.106460 1013451 main.go:141] libmachine: (addons-097644) DBG | created network xml: 
	I0127 14:05:44.106474 1013451 main.go:141] libmachine: (addons-097644) DBG | <network>
	I0127 14:05:44.106506 1013451 main.go:141] libmachine: (addons-097644) DBG |   <name>mk-addons-097644</name>
	I0127 14:05:44.106520 1013451 main.go:141] libmachine: (addons-097644) DBG |   <dns enable='no'/>
	I0127 14:05:44.106527 1013451 main.go:141] libmachine: (addons-097644) DBG |   
	I0127 14:05:44.106538 1013451 main.go:141] libmachine: (addons-097644) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 14:05:44.106549 1013451 main.go:141] libmachine: (addons-097644) DBG |     <dhcp>
	I0127 14:05:44.106558 1013451 main.go:141] libmachine: (addons-097644) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 14:05:44.106566 1013451 main.go:141] libmachine: (addons-097644) DBG |     </dhcp>
	I0127 14:05:44.106585 1013451 main.go:141] libmachine: (addons-097644) DBG |   </ip>
	I0127 14:05:44.106598 1013451 main.go:141] libmachine: (addons-097644) DBG |   
	I0127 14:05:44.106608 1013451 main.go:141] libmachine: (addons-097644) DBG | </network>
	I0127 14:05:44.106620 1013451 main.go:141] libmachine: (addons-097644) DBG | 
	I0127 14:05:44.112205 1013451 main.go:141] libmachine: (addons-097644) DBG | trying to create private KVM network mk-addons-097644 192.168.39.0/24...
	I0127 14:05:44.180056 1013451 main.go:141] libmachine: (addons-097644) DBG | private KVM network mk-addons-097644 192.168.39.0/24 created
	I0127 14:05:44.180144 1013451 main.go:141] libmachine: (addons-097644) setting up store path in /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 ...
	I0127 14:05:44.180171 1013451 main.go:141] libmachine: (addons-097644) building disk image from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:05:44.180189 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.180124 1013473 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:44.180396 1013451 main.go:141] libmachine: (addons-097644) Downloading /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:05:44.489532 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.489354 1013473 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa...
	I0127 14:05:44.674691 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.674507 1013473 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/addons-097644.rawdisk...
	I0127 14:05:44.674726 1013451 main.go:141] libmachine: (addons-097644) DBG | Writing magic tar header
	I0127 14:05:44.674736 1013451 main.go:141] libmachine: (addons-097644) DBG | Writing SSH key tar header
	I0127 14:05:44.674747 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.674662 1013473 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 ...
	I0127 14:05:44.674836 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644
	I0127 14:05:44.674866 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 (perms=drwx------)
	I0127 14:05:44.674877 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines
	I0127 14:05:44.674890 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:44.674897 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652
	I0127 14:05:44.674908 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:05:44.674915 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins
	I0127 14:05:44.674926 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home
	I0127 14:05:44.674933 1013451 main.go:141] libmachine: (addons-097644) DBG | skipping /home - not owner
	I0127 14:05:44.674963 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:05:44.674987 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube (perms=drwxr-xr-x)
	I0127 14:05:44.675015 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652 (perms=drwxrwxr-x)
	I0127 14:05:44.675025 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:05:44.675035 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:05:44.675040 1013451 main.go:141] libmachine: (addons-097644) creating domain...
	I0127 14:05:44.676087 1013451 main.go:141] libmachine: (addons-097644) define libvirt domain using xml: 
	I0127 14:05:44.676112 1013451 main.go:141] libmachine: (addons-097644) <domain type='kvm'>
	I0127 14:05:44.676119 1013451 main.go:141] libmachine: (addons-097644)   <name>addons-097644</name>
	I0127 14:05:44.676125 1013451 main.go:141] libmachine: (addons-097644)   <memory unit='MiB'>4000</memory>
	I0127 14:05:44.676133 1013451 main.go:141] libmachine: (addons-097644)   <vcpu>2</vcpu>
	I0127 14:05:44.676142 1013451 main.go:141] libmachine: (addons-097644)   <features>
	I0127 14:05:44.676170 1013451 main.go:141] libmachine: (addons-097644)     <acpi/>
	I0127 14:05:44.676190 1013451 main.go:141] libmachine: (addons-097644)     <apic/>
	I0127 14:05:44.676198 1013451 main.go:141] libmachine: (addons-097644)     <pae/>
	I0127 14:05:44.676204 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676219 1013451 main.go:141] libmachine: (addons-097644)   </features>
	I0127 14:05:44.676234 1013451 main.go:141] libmachine: (addons-097644)   <cpu mode='host-passthrough'>
	I0127 14:05:44.676256 1013451 main.go:141] libmachine: (addons-097644)   
	I0127 14:05:44.676274 1013451 main.go:141] libmachine: (addons-097644)   </cpu>
	I0127 14:05:44.676285 1013451 main.go:141] libmachine: (addons-097644)   <os>
	I0127 14:05:44.676290 1013451 main.go:141] libmachine: (addons-097644)     <type>hvm</type>
	I0127 14:05:44.676295 1013451 main.go:141] libmachine: (addons-097644)     <boot dev='cdrom'/>
	I0127 14:05:44.676302 1013451 main.go:141] libmachine: (addons-097644)     <boot dev='hd'/>
	I0127 14:05:44.676329 1013451 main.go:141] libmachine: (addons-097644)     <bootmenu enable='no'/>
	I0127 14:05:44.676352 1013451 main.go:141] libmachine: (addons-097644)   </os>
	I0127 14:05:44.676365 1013451 main.go:141] libmachine: (addons-097644)   <devices>
	I0127 14:05:44.676382 1013451 main.go:141] libmachine: (addons-097644)     <disk type='file' device='cdrom'>
	I0127 14:05:44.676400 1013451 main.go:141] libmachine: (addons-097644)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/boot2docker.iso'/>
	I0127 14:05:44.676411 1013451 main.go:141] libmachine: (addons-097644)       <target dev='hdc' bus='scsi'/>
	I0127 14:05:44.676436 1013451 main.go:141] libmachine: (addons-097644)       <readonly/>
	I0127 14:05:44.676446 1013451 main.go:141] libmachine: (addons-097644)     </disk>
	I0127 14:05:44.676457 1013451 main.go:141] libmachine: (addons-097644)     <disk type='file' device='disk'>
	I0127 14:05:44.676474 1013451 main.go:141] libmachine: (addons-097644)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:05:44.676491 1013451 main.go:141] libmachine: (addons-097644)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/addons-097644.rawdisk'/>
	I0127 14:05:44.676503 1013451 main.go:141] libmachine: (addons-097644)       <target dev='hda' bus='virtio'/>
	I0127 14:05:44.676512 1013451 main.go:141] libmachine: (addons-097644)     </disk>
	I0127 14:05:44.676523 1013451 main.go:141] libmachine: (addons-097644)     <interface type='network'>
	I0127 14:05:44.676535 1013451 main.go:141] libmachine: (addons-097644)       <source network='mk-addons-097644'/>
	I0127 14:05:44.676543 1013451 main.go:141] libmachine: (addons-097644)       <model type='virtio'/>
	I0127 14:05:44.676554 1013451 main.go:141] libmachine: (addons-097644)     </interface>
	I0127 14:05:44.676567 1013451 main.go:141] libmachine: (addons-097644)     <interface type='network'>
	I0127 14:05:44.676577 1013451 main.go:141] libmachine: (addons-097644)       <source network='default'/>
	I0127 14:05:44.676588 1013451 main.go:141] libmachine: (addons-097644)       <model type='virtio'/>
	I0127 14:05:44.676597 1013451 main.go:141] libmachine: (addons-097644)     </interface>
	I0127 14:05:44.676607 1013451 main.go:141] libmachine: (addons-097644)     <serial type='pty'>
	I0127 14:05:44.676615 1013451 main.go:141] libmachine: (addons-097644)       <target port='0'/>
	I0127 14:05:44.676624 1013451 main.go:141] libmachine: (addons-097644)     </serial>
	I0127 14:05:44.676638 1013451 main.go:141] libmachine: (addons-097644)     <console type='pty'>
	I0127 14:05:44.676650 1013451 main.go:141] libmachine: (addons-097644)       <target type='serial' port='0'/>
	I0127 14:05:44.676666 1013451 main.go:141] libmachine: (addons-097644)     </console>
	I0127 14:05:44.676678 1013451 main.go:141] libmachine: (addons-097644)     <rng model='virtio'>
	I0127 14:05:44.676688 1013451 main.go:141] libmachine: (addons-097644)       <backend model='random'>/dev/random</backend>
	I0127 14:05:44.676695 1013451 main.go:141] libmachine: (addons-097644)     </rng>
	I0127 14:05:44.676702 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676711 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676720 1013451 main.go:141] libmachine: (addons-097644)   </devices>
	I0127 14:05:44.676726 1013451 main.go:141] libmachine: (addons-097644) </domain>
	I0127 14:05:44.676788 1013451 main.go:141] libmachine: (addons-097644) 
	I0127 14:05:44.681531 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:bc:17:24 in network default
	I0127 14:05:44.682103 1013451 main.go:141] libmachine: (addons-097644) starting domain...
	I0127 14:05:44.682120 1013451 main.go:141] libmachine: (addons-097644) ensuring networks are active...
	I0127 14:05:44.682127 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:44.682898 1013451 main.go:141] libmachine: (addons-097644) Ensuring network default is active
	I0127 14:05:44.683272 1013451 main.go:141] libmachine: (addons-097644) Ensuring network mk-addons-097644 is active
	I0127 14:05:44.683705 1013451 main.go:141] libmachine: (addons-097644) getting domain XML...
	I0127 14:05:44.684437 1013451 main.go:141] libmachine: (addons-097644) creating domain...
	I0127 14:05:45.896162 1013451 main.go:141] libmachine: (addons-097644) waiting for IP...
	I0127 14:05:45.896892 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:45.897344 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:45.897436 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:45.897354 1013473 retry.go:31] will retry after 236.581088ms: waiting for domain to come up
	I0127 14:05:46.135836 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.136377 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.136409 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.136324 1013473 retry.go:31] will retry after 316.29449ms: waiting for domain to come up
	I0127 14:05:46.454651 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.455132 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.455160 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.455064 1013473 retry.go:31] will retry after 470.066632ms: waiting for domain to come up
	I0127 14:05:46.926708 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.927233 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.927260 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.927215 1013473 retry.go:31] will retry after 394.465051ms: waiting for domain to come up
	I0127 14:05:47.322830 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:47.323381 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:47.323413 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:47.323322 1013473 retry.go:31] will retry after 512.0087ms: waiting for domain to come up
	I0127 14:05:47.837180 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:47.837627 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:47.837654 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:47.837597 1013473 retry.go:31] will retry after 602.684619ms: waiting for domain to come up
	I0127 14:05:48.441447 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:48.441865 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:48.441895 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:48.441834 1013473 retry.go:31] will retry after 1.057148427s: waiting for domain to come up
	I0127 14:05:49.501034 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:49.501504 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:49.501527 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:49.501455 1013473 retry.go:31] will retry after 1.147761253s: waiting for domain to come up
	I0127 14:05:50.651314 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:50.651817 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:50.651882 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:50.651766 1013473 retry.go:31] will retry after 1.445396149s: waiting for domain to come up
	I0127 14:05:52.098809 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:52.099216 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:52.099250 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:52.099170 1013473 retry.go:31] will retry after 2.075111556s: waiting for domain to come up
	I0127 14:05:54.175631 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:54.176081 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:54.176131 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:54.176071 1013473 retry.go:31] will retry after 1.984245215s: waiting for domain to come up
	I0127 14:05:56.163386 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:56.163785 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:56.163814 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:56.163743 1013473 retry.go:31] will retry after 2.265903927s: waiting for domain to come up
	I0127 14:05:58.432199 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:58.432532 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:58.432610 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:58.432499 1013473 retry.go:31] will retry after 4.367217291s: waiting for domain to come up
	I0127 14:06:02.802210 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:02.802571 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:06:02.802600 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:06:02.802549 1013473 retry.go:31] will retry after 3.598012851s: waiting for domain to come up
	I0127 14:06:06.403574 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.404009 1013451 main.go:141] libmachine: (addons-097644) found domain IP: 192.168.39.228
	I0127 14:06:06.404030 1013451 main.go:141] libmachine: (addons-097644) reserving static IP address...
	I0127 14:06:06.404042 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has current primary IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.404496 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find host DHCP lease matching {name: "addons-097644", mac: "52:54:00:9d:d4:27", ip: "192.168.39.228"} in network mk-addons-097644
	I0127 14:06:06.482117 1013451 main.go:141] libmachine: (addons-097644) reserved static IP address 192.168.39.228 for domain addons-097644
	I0127 14:06:06.482150 1013451 main.go:141] libmachine: (addons-097644) DBG | Getting to WaitForSSH function...
	I0127 14:06:06.482159 1013451 main.go:141] libmachine: (addons-097644) waiting for SSH...
	I0127 14:06:06.484542 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.484916 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.484946 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.485093 1013451 main.go:141] libmachine: (addons-097644) DBG | Using SSH client type: external
	I0127 14:06:06.485123 1013451 main.go:141] libmachine: (addons-097644) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa (-rw-------)
	I0127 14:06:06.485171 1013451 main.go:141] libmachine: (addons-097644) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:06:06.485189 1013451 main.go:141] libmachine: (addons-097644) DBG | About to run SSH command:
	I0127 14:06:06.485232 1013451 main.go:141] libmachine: (addons-097644) DBG | exit 0
	I0127 14:06:06.609772 1013451 main.go:141] libmachine: (addons-097644) DBG | SSH cmd err, output: <nil>: 
	I0127 14:06:06.610069 1013451 main.go:141] libmachine: (addons-097644) KVM machine creation complete
	I0127 14:06:06.610555 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:06:06.611165 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:06.611373 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:06.611586 1013451 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:06:06.611621 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:06.613057 1013451 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:06:06.613073 1013451 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:06:06.613081 1013451 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:06:06.613090 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.615644 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.616035 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.616063 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.616199 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.616362 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.616508 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.616657 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.616824 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.617054 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.617068 1013451 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:06:06.716630 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:06:06.716673 1013451 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:06:06.716681 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.719631 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.719945 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.719967 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.720264 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.720503 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.720685 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.720841 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.721000 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.721236 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.721251 1013451 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:06:06.826035 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:06:06.826137 1013451 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:06:06.826152 1013451 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:06:06.826166 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:06.826460 1013451 buildroot.go:166] provisioning hostname "addons-097644"
	I0127 14:06:06.826496 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:06.826730 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.829265 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.829710 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.829746 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.829916 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.830136 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.830299 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.830442 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.830601 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.830779 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.830790 1013451 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-097644 && echo "addons-097644" | sudo tee /etc/hostname
	I0127 14:06:06.943475 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-097644
	
	I0127 14:06:06.943511 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.946454 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.946884 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.946916 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.947078 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.947278 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.947449 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.947589 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.947760 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.947980 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.948004 1013451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-097644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-097644/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-097644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:06:07.054387 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
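[Annotation] The shell snippet above maps 127.0.1.1 to the new hostname only when /etc/hosts does not already contain an entry for it. A minimal Go sketch of how such an idempotent command string can be assembled for an arbitrary hostname (illustrative only; hostsFixupCmd is not a real minikube helper):

package main

import "fmt"

// hostsFixupCmd returns a shell snippet that points 127.0.1.1 at the given
// hostname, but only if /etc/hosts has no entry for that hostname yet.
func hostsFixupCmd(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
  else
    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsFixupCmd("addons-097644"))
}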
	I0127 14:06:07.054446 1013451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 14:06:07.054503 1013451 buildroot.go:174] setting up certificates
	I0127 14:06:07.054527 1013451 provision.go:84] configureAuth start
	I0127 14:06:07.054547 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:07.054845 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.057428 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.057824 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.057852 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.057989 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.060187 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.060520 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.060546 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.060713 1013451 provision.go:143] copyHostCerts
	I0127 14:06:07.060793 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 14:06:07.060906 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 14:06:07.060974 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 14:06:07.061053 1013451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.addons-097644 san=[127.0.0.1 192.168.39.228 addons-097644 localhost minikube]
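[Annotation] The server certificate above is generated on the host and later copied into /etc/docker on the guest. A rough standard-library sketch of creating a CA plus a CA-signed server certificate carrying the same SANs the log reports (illustrative; minikube's own crypto helpers and file handling differ):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA used to sign the machine's server certificate.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs shown in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-097644"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.228")},
		DNSNames:     []string{"addons-097644", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}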
	I0127 14:06:07.171259 1013451 provision.go:177] copyRemoteCerts
	I0127 14:06:07.171332 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:06:07.171359 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.173936 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.174300 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.174345 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.174507 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.174718 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.174901 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.175049 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.256072 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:06:07.280263 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 14:06:07.304563 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:06:07.328463 1013451 provision.go:87] duration metric: took 273.91293ms to configureAuth
	I0127 14:06:07.328503 1013451 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:06:07.328710 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:07.328812 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.331515 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.331824 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.331855 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.332095 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.332304 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.332494 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.332664 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.332827 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:07.333034 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:07.333056 1013451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:06:07.551437 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:06:07.551470 1013451 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:06:07.551481 1013451 main.go:141] libmachine: (addons-097644) Calling .GetURL
	I0127 14:06:07.552717 1013451 main.go:141] libmachine: (addons-097644) DBG | using libvirt version 6000000
	I0127 14:06:07.554862 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.555265 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.555309 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.555465 1013451 main.go:141] libmachine: Docker is up and running!
	I0127 14:06:07.555482 1013451 main.go:141] libmachine: Reticulating splines...
	I0127 14:06:07.555493 1013451 client.go:171] duration metric: took 23.685342954s to LocalClient.Create
	I0127 14:06:07.555525 1013451 start.go:167] duration metric: took 23.68541238s to libmachine.API.Create "addons-097644"
	I0127 14:06:07.555552 1013451 start.go:293] postStartSetup for "addons-097644" (driver="kvm2")
	I0127 14:06:07.555570 1013451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:06:07.555596 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.555863 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:06:07.555889 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.557878 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.558160 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.558198 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.558312 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.558488 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.558664 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.558817 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.640270 1013451 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:06:07.644537 1013451 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:06:07.644585 1013451 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 14:06:07.644664 1013451 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 14:06:07.644692 1013451 start.go:296] duration metric: took 89.13009ms for postStartSetup
	I0127 14:06:07.644732 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:06:07.645370 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.648039 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.648405 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.648434 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.648695 1013451 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json ...
	I0127 14:06:07.648902 1013451 start.go:128] duration metric: took 23.797703895s to createHost
	I0127 14:06:07.648927 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.651100 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.651434 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.651481 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.651607 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.651822 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.651975 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.652136 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.652325 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:07.652538 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:07.652554 1013451 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:06:07.750310 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986767.722256723
	
	I0127 14:06:07.750337 1013451 fix.go:216] guest clock: 1737986767.722256723
	I0127 14:06:07.750344 1013451 fix.go:229] Guest: 2025-01-27 14:06:07.722256723 +0000 UTC Remote: 2025-01-27 14:06:07.648915936 +0000 UTC m=+23.906997834 (delta=73.340787ms)
	I0127 14:06:07.750387 1013451 fix.go:200] guest clock delta is within tolerance: 73.340787ms
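[Annotation] The clock check above compares the guest's `date +%s.%N` output with the host time at which the command returned; here the delta is ~73ms, so no resync is needed. A small Go sketch of that comparison; the 2s tolerance is an assumed value for illustration, not taken from the log:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts output like "1737986767.722256723" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to nanosecond precision.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1737986767.722256723")
	host := time.Date(2025, 1, 27, 14, 6, 7, 648915936, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(host)
	tolerance := 2 * time.Second // assumed threshold for this sketch
	within := math.Abs(float64(delta)) <= float64(tolerance)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
}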
	I0127 14:06:07.750393 1013451 start.go:83] releasing machines lock for "addons-097644", held for 23.899285781s
	I0127 14:06:07.750420 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.750687 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.753394 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.753884 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.753910 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.754016 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754573 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754725 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754834 1013451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:06:07.754900 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.754942 1013451 ssh_runner.go:195] Run: cat /version.json
	I0127 14:06:07.754971 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.757717 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.757761 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758110 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.758137 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758171 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.758187 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758397 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.758407 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.758616 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.758632 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.758733 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.758790 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.758889 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.758968 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.862665 1013451 ssh_runner.go:195] Run: systemctl --version
	I0127 14:06:07.869339 1013451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:06:08.030804 1013451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:06:08.038146 1013451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:06:08.038222 1013451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:06:08.055525 1013451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:06:08.055564 1013451 start.go:495] detecting cgroup driver to use...
	I0127 14:06:08.055650 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:06:08.072349 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:06:08.087838 1013451 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:06:08.087904 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:06:08.103124 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:06:08.119044 1013451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:06:08.243455 1013451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:06:08.410960 1013451 docker.go:233] disabling docker service ...
	I0127 14:06:08.411040 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:06:08.425578 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:06:08.438593 1013451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:06:08.564242 1013451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:06:08.678221 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:06:08.692806 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:06:08.713320 1013451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:06:08.713400 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.724369 1013451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:06:08.724451 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.735585 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.746053 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.756606 1013451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:06:08.767332 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.777994 1013451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.795855 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
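[Annotation] The sed invocations above rewrite individual keys in the CRI-O drop-in config: pause image, cgroup manager, conmon cgroup, and the default_sysctls list. A compact Go sketch of the same "replace a key's line in place" idea using a regexp instead of sed (setConfValue is illustrative, not a minikube function):

package main

import (
	"os"
	"regexp"
)

// setConfValue replaces any existing `<key> = ...` line in the file with
// `<key> = "<value>"`, mirroring the sed edits shown in the log.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
	_ = setConfValue(conf, "cgroup_manager", "cgroupfs")
}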
	I0127 14:06:08.806376 1013451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:06:08.815691 1013451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:06:08.815764 1013451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:06:08.828215 1013451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
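[Annotation] The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge only exists once the bridge netfilter module is loaded, so the provisioner falls back to modprobe br_netfilter and then enables IPv4 forwarding. A hedged Go sketch of that fallback (requires root; the direct /proc write stands in for the `sudo sh -c "echo 1 > ..."` seen in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// The sysctl read exits non-zero when br_netfilter has not been loaded yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
	}
	// Equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward (needs root).
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}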
	I0127 14:06:08.837677 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:08.971639 1013451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:06:09.063916 1013451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:06:09.064038 1013451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:06:09.069097 1013451 start.go:563] Will wait 60s for crictl version
	I0127 14:06:09.069188 1013451 ssh_runner.go:195] Run: which crictl
	I0127 14:06:09.073113 1013451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:06:09.113259 1013451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:06:09.113366 1013451 ssh_runner.go:195] Run: crio --version
	I0127 14:06:09.142504 1013451 ssh_runner.go:195] Run: crio --version
	I0127 14:06:09.173583 1013451 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:06:09.174862 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:09.177395 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:09.177812 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:09.177839 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:09.178071 1013451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 14:06:09.182188 1013451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:06:09.194695 1013451 kubeadm.go:883] updating cluster {Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:06:09.194860 1013451 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:06:09.194924 1013451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:06:09.227895 1013451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:06:09.227979 1013451 ssh_runner.go:195] Run: which lz4
	I0127 14:06:09.232384 1013451 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:06:09.236534 1013451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:06:09.236573 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:06:10.668374 1013451 crio.go:462] duration metric: took 1.436016004s to copy over tarball
	I0127 14:06:10.668456 1013451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:06:12.991225 1013451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.322734481s)
	I0127 14:06:12.991265 1013451 crio.go:469] duration metric: took 2.322855117s to extract the tarball
	I0127 14:06:12.991298 1013451 ssh_runner.go:146] rm: /preloaded.tar.lz4
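[Annotation] The preloaded image tarball is copied to the guest, unpacked into /var with lz4 decompression, and then removed. A minimal Go sketch wrapping the same tar invocation (extractPreload is illustrative, not a minikube helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C <dest> -xf <tarball>
// and deletes the tarball afterwards.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}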
	I0127 14:06:13.029341 1013451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:06:13.076231 1013451 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:06:13.076261 1013451 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:06:13.076271 1013451 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.32.1 crio true true} ...
	I0127 14:06:13.076414 1013451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-097644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:06:13.076504 1013451 ssh_runner.go:195] Run: crio config
	I0127 14:06:13.126305 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:06:13.126332 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:06:13.126348 1013451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:06:13.126373 1013451 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-097644 NodeName:addons-097644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:06:13.126544 1013451 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-097644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:06:13.126625 1013451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:06:13.136556 1013451 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:06:13.136615 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:06:13.146362 1013451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 14:06:13.163788 1013451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:06:13.180741 1013451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
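[Annotation] The kubeadm configuration shown above is rendered for this node (advertise address 192.168.39.228, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12) and written to /var/tmp/minikube/kubeadm.yaml.new. A small text/template sketch of rendering such a fragment; the template and struct here are illustrative, not minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	_ = t.Execute(os.Stdout, clusterParams{
		AdvertiseAddress:  "192.168.39.228",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.32.1",
	})
}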
	I0127 14:06:13.198243 1013451 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I0127 14:06:13.202384 1013451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:06:13.214765 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:13.343136 1013451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:06:13.360886 1013451 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644 for IP: 192.168.39.228
	I0127 14:06:13.360930 1013451 certs.go:194] generating shared ca certs ...
	I0127 14:06:13.360952 1013451 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.361149 1013451 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 14:06:13.420822 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt ...
	I0127 14:06:13.420879 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt: {Name:mkc9e8d9cd31bad89b914a0e39146cbc4cb9a566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.421227 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key ...
	I0127 14:06:13.421256 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key: {Name:mk54337b6f7f11134a1a57c50e00b3a25a5764c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.421401 1013451 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 14:06:13.671791 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt ...
	I0127 14:06:13.671827 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt: {Name:mkdf635bff813871fb0a8f71a2bc8202826329c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.672076 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key ...
	I0127 14:06:13.672097 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key: {Name:mkb62b21eecb2941c4e1d8ed131c001defc5b97f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.672212 1013451 certs.go:256] generating profile certs ...
	I0127 14:06:13.672327 1013451 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key
	I0127 14:06:13.672363 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt with IP's: []
	I0127 14:06:13.991379 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt ...
	I0127 14:06:13.991415 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: {Name:mk7115664fd0816a20da8202516a46d36538c4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.991616 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key ...
	I0127 14:06:13.991638 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key: {Name:mkbc457d424e6b80c2d9c2572cbd34113ffac2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.991748 1013451 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b
	I0127 14:06:13.991771 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228]
	I0127 14:06:14.087652 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b ...
	I0127 14:06:14.087693 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b: {Name:mk22529933d8ca851610043569adad4d85cdb151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.087885 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b ...
	I0127 14:06:14.087904 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b: {Name:mk9f9822d6229d3d1127240b0286c22fc9ac2b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.088018 1013451 certs.go:381] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt
	I0127 14:06:14.088115 1013451 certs.go:385] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key
	I0127 14:06:14.088186 1013451 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key
	I0127 14:06:14.088214 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt with IP's: []
	I0127 14:06:14.315571 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt ...
	I0127 14:06:14.315616 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt: {Name:mkf7f0dd114b37a403559f311ca206dc0dfaf354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.315850 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key ...
	I0127 14:06:14.315872 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key: {Name:mk7c251de1f033a991791c5bacc6c6b2e96630a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.316112 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 14:06:14.316168 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:06:14.316208 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:06:14.316249 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 14:06:14.317102 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:06:14.347128 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 14:06:14.372136 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:06:14.397562 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:06:14.422996 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:06:14.448211 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:06:14.474009 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:06:14.501190 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:06:14.526766 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:06:14.552500 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:06:14.570395 1013451 ssh_runner.go:195] Run: openssl version
	I0127 14:06:14.576450 1013451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:06:14.588501 1013451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.593391 1013451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.593460 1013451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.599581 1013451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:06:14.612023 1013451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:06:14.616483 1013451 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:06:14.616554 1013451 kubeadm.go:392] StartCluster: {Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:06:14.616661 1013451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:06:14.616711 1013451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:06:14.653932 1013451 cri.go:89] found id: ""
	I0127 14:06:14.654019 1013451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:06:14.665367 1013451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:06:14.675999 1013451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:06:14.686503 1013451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:06:14.686529 1013451 kubeadm.go:157] found existing configuration files:
	
	I0127 14:06:14.686587 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:06:14.696362 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:06:14.696421 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:06:14.706997 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:06:14.717082 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:06:14.717154 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:06:14.727528 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:06:14.737554 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:06:14.737625 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:06:14.748328 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:06:14.758305 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:06:14.758388 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:06:14.768545 1013451 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:06:14.824105 1013451 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:06:14.824161 1013451 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:06:14.954367 1013451 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:06:14.954546 1013451 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:06:14.954688 1013451 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:06:14.966475 1013451 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:06:15.100500 1013451 out.go:235]   - Generating certificates and keys ...
	I0127 14:06:15.100639 1013451 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:06:15.100710 1013451 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:06:15.100827 1013451 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:06:15.512511 1013451 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:06:15.776387 1013451 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:06:16.241691 1013451 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:06:16.495803 1013451 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:06:16.496119 1013451 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-097644 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0127 14:06:16.692825 1013451 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:06:16.693029 1013451 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-097644 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0127 14:06:16.951084 1013451 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:06:17.150130 1013451 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:06:17.461000 1013451 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:06:17.461403 1013451 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:06:17.774344 1013451 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:06:18.080863 1013451 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:06:18.696649 1013451 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:06:18.826173 1013451 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:06:18.926775 1013451 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:06:18.928106 1013451 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:06:18.932397 1013451 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:06:18.934351 1013451 out.go:235]   - Booting up control plane ...
	I0127 14:06:18.934472 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:06:18.934569 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:06:18.934649 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:06:18.950262 1013451 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:06:18.956527 1013451 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:06:18.956606 1013451 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:06:19.083734 1013451 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:06:19.083865 1013451 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:06:20.084411 1013451 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001422431s
	I0127 14:06:20.084523 1013451 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:06:25.084312 1013451 kubeadm.go:310] [api-check] The API server is healthy after 5.002685853s
	I0127 14:06:25.096890 1013451 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:06:25.113838 1013451 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:06:25.145234 1013451 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:06:25.145454 1013451 kubeadm.go:310] [mark-control-plane] Marking the node addons-097644 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:06:25.158810 1013451 kubeadm.go:310] [bootstrap-token] Using token: eelxhi.iqqoealhyjynagyr
	I0127 14:06:25.160144 1013451 out.go:235]   - Configuring RBAC rules ...
	I0127 14:06:25.160292 1013451 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:06:25.166578 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:06:25.179189 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:06:25.182767 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:06:25.186739 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:06:25.193800 1013451 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:06:25.491524 1013451 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:06:25.946419 1013451 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:06:26.491307 1013451 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:06:26.491353 1013451 kubeadm.go:310] 
	I0127 14:06:26.491436 1013451 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:06:26.491446 1013451 kubeadm.go:310] 
	I0127 14:06:26.491581 1013451 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:06:26.491591 1013451 kubeadm.go:310] 
	I0127 14:06:26.491622 1013451 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:06:26.491706 1013451 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:06:26.491763 1013451 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:06:26.491771 1013451 kubeadm.go:310] 
	I0127 14:06:26.491815 1013451 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:06:26.491823 1013451 kubeadm.go:310] 
	I0127 14:06:26.491902 1013451 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:06:26.491927 1013451 kubeadm.go:310] 
	I0127 14:06:26.491976 1013451 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:06:26.492050 1013451 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:06:26.492110 1013451 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:06:26.492120 1013451 kubeadm.go:310] 
	I0127 14:06:26.492192 1013451 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:06:26.492266 1013451 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:06:26.492279 1013451 kubeadm.go:310] 
	I0127 14:06:26.492347 1013451 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eelxhi.iqqoealhyjynagyr \
	I0127 14:06:26.492435 1013451 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 14:06:26.492455 1013451 kubeadm.go:310] 	--control-plane 
	I0127 14:06:26.492462 1013451 kubeadm.go:310] 
	I0127 14:06:26.492535 1013451 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:06:26.492542 1013451 kubeadm.go:310] 
	I0127 14:06:26.492655 1013451 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eelxhi.iqqoealhyjynagyr \
	I0127 14:06:26.492807 1013451 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 14:06:26.493374 1013451 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
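The [WARNING Service-Kubelet] line above is kubeadm's standard notice that the kubelet systemd unit is not enabled on the guest. On a hand-provisioned node the remediation it names would be run directly; it is shown here only as a sketch and is not executed in this log, since minikube manages the kubelet unit itself:

	sudo systemctl enable kubelet.service
	systemctl is-enabled kubelet    # prints "enabled" once the unit is set to start on boot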
	I0127 14:06:26.493713 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:06:26.493730 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:06:26.495461 1013451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:06:26.496737 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:06:26.508895 1013451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
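The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is the bridge CNI configuration minikube generates for the crio runtime. A representative conflist of that shape, assuming the standard upstream bridge and host-local plugin fields rather than the exact bytes written in this run, would look like:

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF

The subnet and field values above are assumptions taken from the upstream plugin defaults, not read back from the file on the test VM.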
	I0127 14:06:26.531487 1013451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:06:26.531595 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-097644 minikube.k8s.io/updated_at=2025_01_27T14_06_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=addons-097644 minikube.k8s.io/primary=true
	I0127 14:06:26.531600 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:26.660204 1013451 ops.go:34] apiserver oom_adj: -16
	I0127 14:06:26.660344 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:27.161225 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:27.660827 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:28.161152 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:28.661068 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:29.160473 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:29.661076 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.161022 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.660596 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.801320 1013451 kubeadm.go:1113] duration metric: took 4.269789638s to wait for elevateKubeSystemPrivileges
	I0127 14:06:30.801428 1013451 kubeadm.go:394] duration metric: took 16.184866129s to StartCluster
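The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists, alongside creating the minikube-rbac cluster-admin binding for kube-system:default a few lines earlier. Equivalent manual checks, not part of this test run, would be:

	kubectl -n default get serviceaccount default
	kubectl get clusterrolebinding minikube-rbac -o wide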
	I0127 14:06:30.801479 1013451 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:30.801625 1013451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:06:30.802052 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:30.802521 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:06:30.802558 1013451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:06:30.802614 1013451 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
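The toEnable map above lists every addon minikube resolved for this profile and whether it will be turned on. The same set can be inspected or changed per profile from the CLI, for example (for reference only, not executed in this log):

	minikube addons list -p addons-097644
	minikube addons enable ingress -p addons-097644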
	I0127 14:06:30.802733 1013451 addons.go:69] Setting yakd=true in profile "addons-097644"
	I0127 14:06:30.802749 1013451 addons.go:69] Setting inspektor-gadget=true in profile "addons-097644"
	I0127 14:06:30.802771 1013451 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-097644"
	I0127 14:06:30.802767 1013451 addons.go:69] Setting default-storageclass=true in profile "addons-097644"
	I0127 14:06:30.802782 1013451 addons.go:238] Setting addon inspektor-gadget=true in "addons-097644"
	I0127 14:06:30.802787 1013451 addons.go:69] Setting registry=true in profile "addons-097644"
	I0127 14:06:30.802789 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:30.802795 1013451 addons.go:69] Setting ingress=true in profile "addons-097644"
	I0127 14:06:30.802809 1013451 addons.go:69] Setting volcano=true in profile "addons-097644"
	I0127 14:06:30.802819 1013451 addons.go:238] Setting addon ingress=true in "addons-097644"
	I0127 14:06:30.802820 1013451 addons.go:238] Setting addon volcano=true in "addons-097644"
	I0127 14:06:30.802827 1013451 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-097644"
	I0127 14:06:30.802840 1013451 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-097644"
	I0127 14:06:30.802851 1013451 addons.go:69] Setting cloud-spanner=true in profile "addons-097644"
	I0127 14:06:30.802867 1013451 addons.go:238] Setting addon cloud-spanner=true in "addons-097644"
	I0127 14:06:30.802875 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802797 1013451 addons.go:238] Setting addon registry=true in "addons-097644"
	I0127 14:06:30.802879 1013451 addons.go:69] Setting volumesnapshots=true in profile "addons-097644"
	I0127 14:06:30.802883 1013451 addons.go:69] Setting gcp-auth=true in profile "addons-097644"
	I0127 14:06:30.802895 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802901 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802905 1013451 addons.go:238] Setting addon volumesnapshots=true in "addons-097644"
	I0127 14:06:30.802916 1013451 mustload.go:65] Loading cluster: addons-097644
	I0127 14:06:30.802923 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803032 1013451 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-097644"
	I0127 14:06:30.803073 1013451 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-097644"
	I0127 14:06:30.803102 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803126 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:30.802805 1013451 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-097644"
	I0127 14:06:30.803177 1013451 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-097644"
	I0127 14:06:30.803393 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803444 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803447 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.802869 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803474 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803497 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803523 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803613 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803651 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803721 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803736 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803760 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803765 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802783 1013451 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-097644"
	I0127 14:06:30.803814 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803871 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802818 1013451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-097644"
	I0127 14:06:30.802800 1013451 addons.go:69] Setting storage-provisioner=true in profile "addons-097644"
	I0127 14:06:30.804156 1013451 addons.go:238] Setting addon storage-provisioner=true in "addons-097644"
	I0127 14:06:30.804206 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804439 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.804477 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802876 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804686 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.804708 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803834 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802767 1013451 addons.go:69] Setting metrics-server=true in profile "addons-097644"
	I0127 14:06:30.804942 1013451 addons.go:238] Setting addon metrics-server=true in "addons-097644"
	I0127 14:06:30.804972 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.805340 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.805359 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.805372 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.805400 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.813160 1013451 out.go:177] * Verifying Kubernetes components...
	I0127 14:06:30.802876 1013451 addons.go:69] Setting ingress-dns=true in profile "addons-097644"
	I0127 14:06:30.813527 1013451 addons.go:238] Setting addon ingress-dns=true in "addons-097644"
	I0127 14:06:30.813587 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.814019 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.814072 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802762 1013451 addons.go:238] Setting addon yakd=true in "addons-097644"
	I0127 14:06:30.814349 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.814935 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.814996 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.815147 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:30.802869 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804130 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.815331 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.824258 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0127 14:06:30.825578 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0127 14:06:30.829296 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0127 14:06:30.829387 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0127 14:06:30.829572 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.829610 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.829612 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.829656 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.831082 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831098 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831220 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831225 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831765 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.831788 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.831892 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.831912 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832037 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.832062 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832195 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.832345 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.832357 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832802 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.832840 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.833353 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833374 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833419 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833641 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.834032 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.834058 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.834072 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.834105 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.838453 1013451 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-097644"
	I0127 14:06:30.838522 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.838935 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.838995 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.840603 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0127 14:06:30.843186 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0127 14:06:30.843795 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.844312 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.844326 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.844777 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.844960 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.849282 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.849730 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.849777 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.863460 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0127 14:06:30.864087 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.864757 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.864784 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.865181 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.865783 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.865833 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.873911 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.874553 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0127 14:06:30.874638 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.874658 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.875026 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.875592 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.875633 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.876937 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0127 14:06:30.877116 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0127 14:06:30.877252 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.878004 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.878029 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.878487 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.879164 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.879208 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.879477 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0127 14:06:30.879682 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0127 14:06:30.880336 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.880358 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0127 14:06:30.880765 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.881119 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.881138 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.881232 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0127 14:06:30.881435 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.881449 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.881871 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.881945 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.881977 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0127 14:06:30.882565 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.882610 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.882853 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.883356 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.883373 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.883436 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.883527 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.883562 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.883847 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.883908 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.884462 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.884501 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.884735 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.884897 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.884907 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885047 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.885329 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.885475 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.885487 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885686 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.885815 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.885828 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885886 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.886415 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.886456 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.886895 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.886966 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.886997 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.887517 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.887560 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.887602 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.887813 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.890600 1013451 addons.go:238] Setting addon default-storageclass=true in "addons-097644"
	I0127 14:06:30.890648 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.890997 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.891046 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.891842 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.894240 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 14:06:30.894842 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0127 14:06:30.895286 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.895416 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0127 14:06:30.895847 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.895866 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.896029 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.896491 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.896510 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.896934 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.897068 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:30.897222 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.898593 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0127 14:06:30.899242 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I0127 14:06:30.899629 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:30.899790 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.899976 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.900109 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.900506 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.900557 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.900630 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.900646 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.900769 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.900778 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.901107 1013451 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 14:06:30.901132 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 14:06:30.901138 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.901155 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.901326 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.903634 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.904294 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.906030 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.906143 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.906825 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.906847 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.907181 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.907365 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.907455 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.907556 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.907888 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.908168 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.910334 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 14:06:30.910342 1013451 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 14:06:30.912373 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 14:06:30.912395 1013451 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 14:06:30.912423 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.912492 1013451 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 14:06:30.912507 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 14:06:30.912528 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.916227 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.916724 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.916749 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.916943 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.917159 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.917417 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.917631 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.917987 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.918511 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.918550 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.918760 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.918938 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.919079 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.919222 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.923687 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I0127 14:06:30.924139 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.924940 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.924966 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.925060 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I0127 14:06:30.925654 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.926360 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.926379 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.926947 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.927207 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.928312 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.929602 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.930009 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:30.930023 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:30.932400 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:30.932438 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:30.932446 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:30.932454 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:30.932461 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:30.932905 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:30.932938 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:30.932946 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 14:06:30.933068 1013451 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
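Enabling volcano fails at this point because its addon callback reports that it does not support the crio runtime, so the run continues with volcano left off; disabling it explicitly on this profile would be a single command (a sketch, not run here):

	minikube addons disable volcano -p addons-097644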
	I0127 14:06:30.933415 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.935674 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0127 14:06:30.935720 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0127 14:06:30.935830 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I0127 14:06:30.936334 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.936432 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.936950 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.936971 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.937146 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.937165 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.937592 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.937657 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0127 14:06:30.937811 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.938038 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.938478 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.938564 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.938581 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.938719 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.938993 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.939067 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43135
	I0127 14:06:30.939447 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.940030 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.940054 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.940132 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.940643 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.940690 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.941538 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.941561 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.941618 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.941662 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.942168 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.942229 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.942674 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0127 14:06:30.942829 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.942877 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.943179 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.943303 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.943656 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.943677 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.944080 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 14:06:30.944110 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.944168 1013451 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 14:06:30.944396 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.944907 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40807
	I0127 14:06:30.945729 1013451 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 14:06:30.945746 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 14:06:30.945767 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.947021 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 14:06:30.947720 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I0127 14:06:30.947740 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.947803 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0127 14:06:30.948506 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.948668 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.948768 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.949312 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.949184 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949424 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949777 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949798 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949814 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949830 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949831 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:06:30.950788 1013451 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 14:06:30.949879 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 14:06:30.950166 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.950190 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.951652 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.951908 1013451 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:06:30.951930 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:06:30.951955 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.952269 1013451 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 14:06:30.952290 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 14:06:30.952314 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.952564 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.952635 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0127 14:06:30.952847 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.953218 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.953829 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.953849 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.953949 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.954442 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.954245 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 14:06:30.954648 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.957753 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 14:06:30.957955 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958028 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.958064 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958865 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.958661 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958740 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.959195 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.959217 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959357 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.959389 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959494 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.959717 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959903 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.960115 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.960228 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.960239 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.960472 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.960534 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.960555 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.960505 1013451 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 14:06:30.960521 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 14:06:30.960696 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.960722 1013451 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 14:06:30.960806 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.961484 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 14:06:30.962231 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.962333 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.962472 1013451 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 14:06:30.962490 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:06:30.962854 1013451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:06:30.962875 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.962916 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I0127 14:06:30.962788 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.963147 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.963248 1013451 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 14:06:30.963288 1013451 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 14:06:30.963312 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.963411 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.963647 1013451 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 14:06:30.963669 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 14:06:30.963686 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.964105 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 14:06:30.964126 1013451 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 14:06:30.964145 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.964611 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.964641 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.965199 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.965450 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.965974 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 14:06:30.967214 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 14:06:30.967970 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.968624 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 14:06:30.968647 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 14:06:30.968669 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.968879 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969411 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969574 1013451 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 14:06:30.969589 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969904 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42225
	I0127 14:06:30.969929 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.969945 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.970191 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.970321 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.970337 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.970367 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.970441 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.970532 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.970725 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.971134 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.971167 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.971138 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.971183 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971292 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.971326 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.971354 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.971404 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.971423 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971578 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.971627 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.971673 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.971859 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.971884 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.971921 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.971936 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971961 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.972328 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.972505 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 14:06:30.972529 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.972896 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.973056 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.973650 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.973898 1013451 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:06:30.973918 1013451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:06:30.973937 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.974033 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.974299 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 14:06:30.974313 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 14:06:30.974330 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.974535 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.974560 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.974828 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.975014 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.975139 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.975250 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	W0127 14:06:30.976492 1013451 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45740->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.976527 1013451 retry.go:31] will retry after 249.98777ms: ssh: handshake failed: read tcp 192.168.39.1:45740->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.977856 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.977979 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978359 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.978399 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978592 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.978603 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.978618 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978798 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.978858 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.978981 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.979003 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.979124 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.979153 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.979292 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	W0127 14:06:30.980391 1013451 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45758->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.980418 1013451 retry.go:31] will retry after 282.19412ms: ssh: handshake failed: read tcp 192.168.39.1:45758->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.986758 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0127 14:06:30.987211 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.987797 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.987824 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.988141 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.988375 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.990245 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.992302 1013451 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 14:06:30.993765 1013451 out.go:177]   - Using image docker.io/busybox:stable
	I0127 14:06:30.995107 1013451 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 14:06:30.995123 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 14:06:30.995143 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.998641 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.999124 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.999163 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.999454 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.999690 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.999838 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:31.000028 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:31.232253 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 14:06:31.331831 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 14:06:31.347794 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 14:06:31.426357 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 14:06:31.491578 1013451 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 14:06:31.491606 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 14:06:31.512213 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 14:06:31.512250 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 14:06:31.515355 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:06:31.515377 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 14:06:31.516574 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 14:06:31.525098 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 14:06:31.533157 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:06:31.559468 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 14:06:31.559521 1013451 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 14:06:31.575968 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:06:31.648773 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 14:06:31.648804 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 14:06:31.655677 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 14:06:31.655706 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 14:06:31.683163 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:06:31.683200 1013451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:06:31.694871 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 14:06:31.704356 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 14:06:31.704382 1013451 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 14:06:31.744904 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 14:06:31.744940 1013451 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 14:06:31.903974 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 14:06:31.904017 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 14:06:31.964569 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 14:06:31.964605 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 14:06:31.969199 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:06:31.969220 1013451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:06:32.044200 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 14:06:32.044228 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 14:06:32.127179 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 14:06:32.127220 1013451 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 14:06:32.135626 1013451 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.333055604s)
	I0127 14:06:32.135659 1013451 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.320384321s)
	I0127 14:06:32.135752 1013451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:06:32.135838 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 14:06:32.149940 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 14:06:32.149986 1013451 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 14:06:32.315159 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 14:06:32.343031 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 14:06:32.343069 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 14:06:32.360427 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:06:32.363253 1013451 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:32.363282 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 14:06:32.374156 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 14:06:32.374180 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 14:06:32.467818 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 14:06:32.467851 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 14:06:32.668364 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:32.710295 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 14:06:32.747185 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 14:06:32.747216 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 14:06:33.065468 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 14:06:33.065504 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 14:06:33.337642 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 14:06:33.337736 1013451 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 14:06:33.876528 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 14:06:33.876560 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 14:06:34.139997 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.907702721s)
	I0127 14:06:34.140087 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:34.140107 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:34.140458 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:34.140487 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:34.140506 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:34.140527 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:34.140800 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:34.140818 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:34.200127 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 14:06:34.200161 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 14:06:34.562411 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 14:06:34.562443 1013451 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 14:06:34.714298 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 14:06:36.621630 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.289747033s)
	I0127 14:06:36.621713 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.621733 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.621631 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.273802077s)
	I0127 14:06:36.621792 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.621810 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622093 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622103 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622131 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622142 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.622152 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622153 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622192 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622208 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622223 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.622252 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622394 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622422 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622480 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622510 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622521 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.760227 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.760259 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.760715 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.760775 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.760796 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:37.753882 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 14:06:37.753936 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:37.757253 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:37.757684 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:37.757716 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:37.757878 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:37.758108 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:37.758286 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:37.758457 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:38.134471 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 14:06:38.320566 1013451 addons.go:238] Setting addon gcp-auth=true in "addons-097644"
	I0127 14:06:38.320644 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:38.321069 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:38.321130 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:38.336729 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 14:06:38.337259 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:38.337802 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:38.337830 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:38.338264 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:38.338744 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:38.338792 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:38.354738 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0127 14:06:38.355352 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:38.355944 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:38.355968 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:38.356332 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:38.356545 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:38.358363 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:38.358617 1013451 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 14:06:38.358647 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:38.361268 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:38.361655 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:38.361682 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:38.361861 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:38.362040 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:38.362196 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:38.362330 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:39.535502 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.109096844s)
	I0127 14:06:39.535546 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.010420009s)
	I0127 14:06:39.535517 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.018902491s)
	I0127 14:06:39.535592 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535581 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535619 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535628 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535636 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.002450449s)
	I0127 14:06:39.535631 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535671 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535683 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.95968766s)
	I0127 14:06:39.535709 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535724 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535686 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535756 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.840858115s)
	I0127 14:06:39.535612 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535782 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535791 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535840 1013451 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.400060603s)
	I0127 14:06:39.535876 1013451 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.40001441s)
	I0127 14:06:39.535893 1013451 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
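	The sed pipeline completed above edits the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors, then pushes the result back with kubectl replace. Assuming the stock Corefile layout (other plugins omitted here), the edited portion of the ConfigMap ends up roughly like this:

	.:53 {
	    errors
	    log
	    # ... other default plugins unchanged ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}

	This is what lets pods in the cluster resolve host.minikube.internal to the host side of the VM network (192.168.39.1 in this run).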
	I0127 14:06:39.535966 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.220769901s)
	I0127 14:06:39.536002 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536013 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536138 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.175676841s)
	I0127 14:06:39.536161 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536171 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536302 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.867903313s)
	W0127 14:06:39.536330 1013451 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 14:06:39.536367 1013451 retry.go:31] will retry after 296.657665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
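	The failure and retry above are an ordering problem rather than a config error: the single kubectl apply submits both the snapshot CRDs and a VolumeSnapshotClass object, and the API server cannot map that custom resource until the CRDs it just created are established (hence "ensure CRDs are installed first"). The retried apply a few lines later succeeds once the CRDs have registered. Below is a minimal client-go sketch of waiting for the Established condition before applying dependent objects; the function name, timeout, and polling interval are illustrative, and this is not how minikube itself handles it (minikube just retries the whole apply, later with --force).

	package main

	import (
		"context"
		"fmt"
		"time"

		apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForCRDEstablished polls a CustomResourceDefinition until the API
	// server reports it as Established - the condition the failed apply above
	// was missing when it tried to create a VolumeSnapshotClass.
	func waitForCRDEstablished(ctx context.Context, client apiextensionsclient.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("CRD %s not established within %s", name, timeout)
	}

	func main() {
		// Kubeconfig path taken from the commands in this log; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := apiextensionsclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForCRDEstablished(context.Background(), client,
			"volumesnapshotclasses.snapshot.storage.k8s.io", 30*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
	}

	An equivalent kubectl-only approach is to run "kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io" between applying the CRDs and applying the objects that use them.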
	I0127 14:06:39.536420 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.826074832s)
	I0127 14:06:39.536451 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536464 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536976 1013451 node_ready.go:35] waiting up to 6m0s for node "addons-097644" to be "Ready" ...
	I0127 14:06:39.538246 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538268 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538278 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538286 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538255 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538334 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538358 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538372 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538384 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538395 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538416 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538437 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538457 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538472 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538495 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538521 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538546 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538560 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538568 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538581 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538594 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538529 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538632 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538641 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538644 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538649 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538655 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538658 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538544 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538662 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538437 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538666 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538707 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538732 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538738 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538747 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538754 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538954 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538987 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538994 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538457 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539033 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539043 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.539051 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538631 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539103 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.539111 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.539291 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.539323 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539331 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539465 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.539494 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539501 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540397 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540437 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540445 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540507 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540538 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540545 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540555 1013451 addons.go:479] Verifying addon metrics-server=true in "addons-097644"
	I0127 14:06:39.540638 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540659 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540664 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540670 1013451 addons.go:479] Verifying addon ingress=true in "addons-097644"
	I0127 14:06:39.540826 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540849 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540856 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540865 1013451 addons.go:479] Verifying addon registry=true in "addons-097644"
	I0127 14:06:39.541201 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.541235 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.541251 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.541333 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.541374 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.541381 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.543517 1013451 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-097644 service yakd-dashboard -n yakd-dashboard
	
	I0127 14:06:39.543527 1013451 out.go:177] * Verifying ingress addon...
	I0127 14:06:39.543529 1013451 out.go:177] * Verifying registry addon...
	I0127 14:06:39.545868 1013451 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 14:06:39.546062 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 14:06:39.551413 1013451 node_ready.go:49] node "addons-097644" has status "Ready":"True"
	I0127 14:06:39.551444 1013451 node_ready.go:38] duration metric: took 14.446121ms for node "addons-097644" to be "Ready" ...
	I0127 14:06:39.551456 1013451 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:06:39.591856 1013451 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 14:06:39.591887 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:39.591997 1013451 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 14:06:39.592022 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:39.604544 1013451 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:39.620217 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.620245 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.620663 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.620712 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.620733 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.833775 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:40.042238 1013451 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-097644" context rescaled to 1 replicas
	I0127 14:06:40.056864 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:40.057325 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:40.574204 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:40.574352 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:40.691503 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.977142989s)
	I0127 14:06:40.691571 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:40.691567 1013451 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.332922668s)
	I0127 14:06:40.691586 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:40.692022 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:40.692044 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:40.692055 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:40.692080 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:40.692356 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:40.692379 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:40.692393 1013451 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-097644"
	I0127 14:06:40.693820 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:40.693819 1013451 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 14:06:40.695829 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 14:06:40.696785 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 14:06:40.697165 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 14:06:40.697193 1013451 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 14:06:40.719430 1013451 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 14:06:40.719457 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:40.802113 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 14:06:40.802145 1013451 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 14:06:40.994953 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 14:06:40.995010 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 14:06:41.051371 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:41.055369 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:41.085073 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 14:06:41.212968 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:41.550636 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:41.551229 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:41.619011 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:41.704620 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:42.054408 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:42.054655 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:42.202621 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:42.508558 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.674717249s)
	I0127 14:06:42.508636 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:42.508654 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:42.508962 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:42.508984 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:42.508994 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:42.509010 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:42.509270 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:42.509297 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:42.509297 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:42.550865 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:42.552139 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:42.700968 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:43.051426 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:43.051775 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:43.219737 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:43.654172 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:43.659020 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.573886282s)
	I0127 14:06:43.659089 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:43.659111 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:43.659423 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:43.659520 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:43.659535 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:43.659544 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:43.659496 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:43.659831 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:43.659850 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:43.661096 1013451 addons.go:479] Verifying addon gcp-auth=true in "addons-097644"
	I0127 14:06:43.662980 1013451 out.go:177] * Verifying gcp-auth addon...
	I0127 14:06:43.665443 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 14:06:43.667959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:43.686297 1013451 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 14:06:43.686332 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:43.698333 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:43.752116 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:44.051507 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:44.051642 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:44.169983 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:44.202197 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:44.550596 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:44.551695 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:44.669572 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:44.701465 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:45.051101 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:45.051498 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:45.168566 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:45.201519 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:45.551156 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:45.552669 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:45.675646 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:45.702063 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:46.052220 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:46.052234 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:46.112080 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:46.168904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:46.201719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:46.551973 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:46.552112 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:46.668877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:46.701725 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:47.050599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:47.050979 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:47.169889 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:47.203312 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:47.550817 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:47.551169 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:47.668803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:47.701344 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:48.053223 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:48.053534 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:48.120721 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:48.172399 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:48.201255 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:48.552152 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:48.562421 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:48.670118 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:48.706743 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:49.056813 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:49.057202 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:49.175007 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:49.207070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:49.552745 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:49.552809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:49.670875 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:49.702320 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.051877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:50.052248 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:50.168779 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:50.202479 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.551892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:50.552457 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:50.615652 1013451 pod_ready.go:93] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.615678 1013451 pod_ready.go:82] duration metric: took 11.011100516s for pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.615689 1013451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.627270 1013451 pod_ready.go:93] pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.627306 1013451 pod_ready.go:82] duration metric: took 11.610993ms for pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.627316 1013451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.632345 1013451 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xk7kv" not found
	I0127 14:06:50.632372 1013451 pod_ready.go:82] duration metric: took 5.049964ms for pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace to be "Ready" ...
	E0127 14:06:50.632383 1013451 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xk7kv" not found
	I0127 14:06:50.632390 1013451 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.637099 1013451 pod_ready.go:93] pod "etcd-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.637119 1013451 pod_ready.go:82] duration metric: took 4.724126ms for pod "etcd-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.637128 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.641577 1013451 pod_ready.go:93] pod "kube-apiserver-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.641597 1013451 pod_ready.go:82] duration metric: took 4.462666ms for pod "kube-apiserver-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.641605 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.669462 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:50.706029 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.809340 1013451 pod_ready.go:93] pod "kube-controller-manager-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.809365 1013451 pod_ready.go:82] duration metric: took 167.752957ms for pod "kube-controller-manager-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.809377 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4zwd" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.050450 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:51.051944 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:51.170085 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:51.202947 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:51.208582 1013451 pod_ready.go:93] pod "kube-proxy-f4zwd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:51.208606 1013451 pod_ready.go:82] duration metric: took 399.222781ms for pod "kube-proxy-f4zwd" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.208616 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.551263 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:51.551705 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:51.608807 1013451 pod_ready.go:93] pod "kube-scheduler-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:51.608840 1013451 pod_ready.go:82] duration metric: took 400.21695ms for pod "kube-scheduler-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.608854 1013451 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.670471 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:51.701367 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:52.050707 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:52.050834 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:52.169284 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:52.200658 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:52.550340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:52.551185 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:52.668895 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:52.702017 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:53.057413 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:53.057641 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:53.169648 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:53.202006 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:53.550241 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:53.550722 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:53.620587 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:53.669530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:53.701719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:54.052792 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:54.053279 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:54.169476 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:54.201306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:54.551907 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:54.552638 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:54.669077 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:54.701764 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:55.100240 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:55.100296 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:55.182070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:55.201395 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:55.551761 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:55.551927 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:55.668933 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:55.701923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:56.050536 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:56.050982 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:56.119811 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:56.168904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:56.202072 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:56.551874 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:56.552481 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:56.669587 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:56.701617 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:57.050231 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:57.050613 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:57.170169 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:57.201972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:57.551609 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:57.551795 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:57.670084 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:57.702058 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:58.383183 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:58.383399 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:58.384179 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:58.384242 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:58.387592 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:58.550466 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:58.550887 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:58.668764 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:58.701776 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:59.050306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:59.050697 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:59.169436 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:59.204311 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:59.560946 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:59.560967 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:59.670919 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:59.702414 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:00.468343 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:00.468634 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:00.469971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:00.470230 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:00.475121 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:00.551178 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:00.552210 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:00.670053 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:00.702754 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:01.051143 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:01.051753 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:01.169521 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:01.202017 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:01.550952 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:01.551011 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:01.669355 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:01.701492 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:02.054133 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:02.054531 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:02.169554 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:02.201828 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:02.553190 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:02.553417 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:02.616135 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:02.669251 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:02.702653 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:03.051556 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:03.052058 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:03.168688 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:03.206615 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:03.552205 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:03.552324 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:03.670459 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:03.705277 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:04.050893 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:04.051564 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:04.169123 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:04.271611 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:04.550873 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:04.551002 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:04.618165 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:04.669774 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:04.701982 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:05.050574 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:05.050984 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:05.168730 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:05.201868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:05.550374 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:05.550418 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:05.668407 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:05.701325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:06.050944 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:06.051773 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:06.169027 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:06.201826 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:06.550446 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:06.551065 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.011171 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.012800 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:07.014528 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:07.051263 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.052394 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:07.168896 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.202772 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:07.552036 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.552265 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:07.669494 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.701789 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:08.050016 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:08.050930 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:08.169153 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:08.201129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:08.552701 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:08.554461 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:08.669806 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:08.702780 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:09.051527 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:09.051791 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:09.115325 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:09.169334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:09.201659 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:09.550572 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:09.550938 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:09.668878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:09.701776 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:10.051782 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:10.052645 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:10.168877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:10.201786 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:10.551300 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:10.551673 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:10.669403 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:10.700959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:11.051149 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:11.051672 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:11.115643 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:11.169733 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:11.202417 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:11.552212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:11.552243 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:11.671629 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:11.701802 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.051799 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:12.054435 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:12.170154 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:12.203930 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.557266 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:12.557520 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:12.625739 1013451 pod_ready.go:93] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"True"
	I0127 14:07:12.625769 1013451 pod_ready.go:82] duration metric: took 21.016907428s for pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.625780 1013451 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.635943 1013451 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:07:12.635969 1013451 pod_ready.go:82] duration metric: took 10.183333ms for pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.635988 1013451 pod_ready.go:39] duration metric: took 33.08451816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:07:12.636039 1013451 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:07:12.636109 1013451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:07:12.671346 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:12.688681 1013451 api_server.go:72] duration metric: took 41.886073676s to wait for apiserver process to appear ...
	I0127 14:07:12.688712 1013451 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:07:12.688736 1013451 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 14:07:12.701264 1013451 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 14:07:12.702757 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.703236 1013451 api_server.go:141] control plane version: v1.32.1
	I0127 14:07:12.703267 1013451 api_server.go:131] duration metric: took 14.546167ms to wait for apiserver health ...
	I0127 14:07:12.703280 1013451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:07:12.717932 1013451 system_pods.go:59] 18 kube-system pods found
	I0127 14:07:12.717976 1013451 system_pods.go:61] "amd-gpu-device-plugin-89xv2" [7b98e34d-687f-47aa-8a1f-b8c5c016e93e] Running
	I0127 14:07:12.717984 1013451 system_pods.go:61] "coredns-668d6bf9bc-f5h88" [f45297c4-5f83-45a6-9f30-d0b16d29ef1d] Running
	I0127 14:07:12.717995 1013451 system_pods.go:61] "csi-hostpath-attacher-0" [0e65ff6e-fdeb-4e47-a281-58d2846521dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 14:07:12.718012 1013451 system_pods.go:61] "csi-hostpath-resizer-0" [f4b69299-7108-4d71-a19f-c8640d4d9d7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 14:07:12.718024 1013451 system_pods.go:61] "csi-hostpathplugin-8jql5" [cdb87938-f761-462d-aaf8-e4a74f0d8e7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 14:07:12.718035 1013451 system_pods.go:61] "etcd-addons-097644" [15355068-d7bd-4c15-8402-670f796142e0] Running
	I0127 14:07:12.718043 1013451 system_pods.go:61] "kube-apiserver-addons-097644" [3bf8c5a4-9f46-4a38-8c40-03e649c1865a] Running
	I0127 14:07:12.718050 1013451 system_pods.go:61] "kube-controller-manager-addons-097644" [b91db1d0-e6e1-40f4-a230-9496ded8dfbc] Running
	I0127 14:07:12.718057 1013451 system_pods.go:61] "kube-ingress-dns-minikube" [f4e9fbe7-9f01-42c9-abd2-70a375dbf64b] Running
	I0127 14:07:12.718063 1013451 system_pods.go:61] "kube-proxy-f4zwd" [35fadf52-7154-403a-9e7c-d6efebab978e] Running
	I0127 14:07:12.718070 1013451 system_pods.go:61] "kube-scheduler-addons-097644" [64c5112b-77bd-466f-a1ed-e8f2c6512297] Running
	I0127 14:07:12.718076 1013451 system_pods.go:61] "metrics-server-7fbb699795-dr2kc" [d5f1b090-54ae-4efb-ade0-56f8442d821c] Running
	I0127 14:07:12.718082 1013451 system_pods.go:61] "nvidia-device-plugin-daemonset-bs6d4" [157addb8-6c2f-41d6-9d57-8ff984241b50] Running
	I0127 14:07:12.718088 1013451 system_pods.go:61] "registry-6c88467877-gs69t" [56ae8219-917b-43a3-8b3a-9965b018d7ae] Running
	I0127 14:07:12.718096 1013451 system_pods.go:61] "registry-proxy-68qft" [fcd36f1c-2ee6-49df-985c-78afd0b91e4b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 14:07:12.718107 1013451 system_pods.go:61] "snapshot-controller-68b874b76f-bncpk" [b196166f-4021-4337-a63b-54cb610bac71] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.718120 1013451 system_pods.go:61] "snapshot-controller-68b874b76f-pqf9k" [1173dcb4-3cf3-44b8-ae6f-7c755536337d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.718127 1013451 system_pods.go:61] "storage-provisioner" [d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf] Running
	I0127 14:07:12.718139 1013451 system_pods.go:74] duration metric: took 14.846764ms to wait for pod list to return data ...
	I0127 14:07:12.718153 1013451 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:07:12.721126 1013451 default_sa.go:45] found service account: "default"
	I0127 14:07:12.721157 1013451 default_sa.go:55] duration metric: took 2.993622ms for default service account to be created ...
	I0127 14:07:12.721171 1013451 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:07:12.728179 1013451 system_pods.go:87] 18 kube-system pods found
	I0127 14:07:12.730708 1013451 system_pods.go:105] "amd-gpu-device-plugin-89xv2" [7b98e34d-687f-47aa-8a1f-b8c5c016e93e] Running
	I0127 14:07:12.730727 1013451 system_pods.go:105] "coredns-668d6bf9bc-f5h88" [f45297c4-5f83-45a6-9f30-d0b16d29ef1d] Running
	I0127 14:07:12.730738 1013451 system_pods.go:105] "csi-hostpath-attacher-0" [0e65ff6e-fdeb-4e47-a281-58d2846521dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 14:07:12.730748 1013451 system_pods.go:105] "csi-hostpath-resizer-0" [f4b69299-7108-4d71-a19f-c8640d4d9d7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 14:07:12.730761 1013451 system_pods.go:105] "csi-hostpathplugin-8jql5" [cdb87938-f761-462d-aaf8-e4a74f0d8e7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 14:07:12.730773 1013451 system_pods.go:105] "etcd-addons-097644" [15355068-d7bd-4c15-8402-670f796142e0] Running
	I0127 14:07:12.730781 1013451 system_pods.go:105] "kube-apiserver-addons-097644" [3bf8c5a4-9f46-4a38-8c40-03e649c1865a] Running
	I0127 14:07:12.730787 1013451 system_pods.go:105] "kube-controller-manager-addons-097644" [b91db1d0-e6e1-40f4-a230-9496ded8dfbc] Running
	I0127 14:07:12.730794 1013451 system_pods.go:105] "kube-ingress-dns-minikube" [f4e9fbe7-9f01-42c9-abd2-70a375dbf64b] Running
	I0127 14:07:12.730798 1013451 system_pods.go:105] "kube-proxy-f4zwd" [35fadf52-7154-403a-9e7c-d6efebab978e] Running
	I0127 14:07:12.730802 1013451 system_pods.go:105] "kube-scheduler-addons-097644" [64c5112b-77bd-466f-a1ed-e8f2c6512297] Running
	I0127 14:07:12.730806 1013451 system_pods.go:105] "metrics-server-7fbb699795-dr2kc" [d5f1b090-54ae-4efb-ade0-56f8442d821c] Running
	I0127 14:07:12.730811 1013451 system_pods.go:105] "nvidia-device-plugin-daemonset-bs6d4" [157addb8-6c2f-41d6-9d57-8ff984241b50] Running
	I0127 14:07:12.730815 1013451 system_pods.go:105] "registry-6c88467877-gs69t" [56ae8219-917b-43a3-8b3a-9965b018d7ae] Running
	I0127 14:07:12.730821 1013451 system_pods.go:105] "registry-proxy-68qft" [fcd36f1c-2ee6-49df-985c-78afd0b91e4b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 14:07:12.730828 1013451 system_pods.go:105] "snapshot-controller-68b874b76f-bncpk" [b196166f-4021-4337-a63b-54cb610bac71] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.730836 1013451 system_pods.go:105] "snapshot-controller-68b874b76f-pqf9k" [1173dcb4-3cf3-44b8-ae6f-7c755536337d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.730843 1013451 system_pods.go:105] "storage-provisioner" [d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf] Running
	I0127 14:07:12.730852 1013451 system_pods.go:147] duration metric: took 9.674182ms to wait for k8s-apps to be running ...
	I0127 14:07:12.730866 1013451 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:07:12.730919 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:07:12.776597 1013451 system_svc.go:56] duration metric: took 45.717863ms WaitForService to wait for kubelet
	I0127 14:07:12.776634 1013451 kubeadm.go:582] duration metric: took 41.974036194s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:07:12.776668 1013451 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:07:12.779895 1013451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:07:12.779925 1013451 node_conditions.go:123] node cpu capacity is 2
	I0127 14:07:12.779937 1013451 node_conditions.go:105] duration metric: took 3.263578ms to run NodePressure ...
	I0127 14:07:12.779949 1013451 start.go:241] waiting for startup goroutines ...
	I0127 14:07:13.051978 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:13.052021 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:13.185783 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:13.206287 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:13.550709 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:13.551235 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:13.669317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:13.701284 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:14.050846 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:14.051195 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:14.168756 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:14.202094 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:14.550255 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:14.551602 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:14.669317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:14.701627 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:15.053046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:15.053769 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:15.170995 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:15.203340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:15.550746 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:15.551289 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:15.669797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:15.702168 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:16.050144 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:16.050517 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:16.169356 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:16.201683 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:16.550953 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:16.551195 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:16.669784 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:16.702119 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:17.051144 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:17.051141 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:17.468098 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:17.469892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:17.551344 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:17.551464 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:17.669038 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:17.702218 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:18.051797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:18.052165 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:18.169400 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:18.202195 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:18.551843 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:18.552250 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:18.668610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:18.701555 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:19.050623 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:19.051183 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:19.170878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:19.201626 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:19.563323 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:19.565912 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:19.668974 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:19.702334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:20.051931 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:20.052068 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:20.169838 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:20.201669 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:20.551529 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:20.551698 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:20.669152 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:20.701960 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:21.051433 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:21.051582 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:21.169879 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:21.201792 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:21.551317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:21.551547 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:21.669135 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:21.701862 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:22.050599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:22.050786 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:22.169800 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:22.201820 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:22.549984 1013451 kapi.go:107] duration metric: took 43.003916156s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 14:07:22.550678 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:22.670404 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:22.701421 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:23.051144 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:23.169833 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:23.201769 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:23.550570 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:23.669457 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:23.701823 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:24.050614 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:24.169635 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:24.201972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:24.549864 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:24.850060 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:24.850512 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:25.051285 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:25.168488 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:25.202049 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:25.550619 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:25.669472 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:25.701812 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:26.050499 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:26.169201 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:26.201034 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:26.550623 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:26.669459 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:26.702346 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:27.051287 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:27.169129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:27.201158 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:27.551107 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:27.670129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:27.702139 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:28.050633 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:28.169514 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:28.201745 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:28.549622 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:28.669711 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:28.701840 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:29.049926 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:29.169680 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:29.202737 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:29.550738 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:29.669967 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:29.701832 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:30.051104 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:30.169470 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:30.202270 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:30.550200 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:30.669788 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:30.701729 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:31.050315 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:31.169180 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:31.202245 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:31.550908 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:31.669616 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:31.701623 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:32.049918 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:32.169923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:32.202237 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:32.550701 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:32.669164 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:32.701141 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:33.050480 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:33.168992 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:33.202153 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:33.550701 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:33.669874 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:33.702366 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:34.050511 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:34.169277 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:34.201418 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:34.550643 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:34.669531 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:34.701256 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:35.054928 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:35.169647 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:35.201868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:35.549900 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:35.669754 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:35.701752 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:36.050017 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:36.169892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:36.204020 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:36.551071 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:36.669899 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:36.701717 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:37.050081 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:37.169825 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:37.202223 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:37.550847 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:37.669530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:37.701678 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:38.050063 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:38.169923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:38.202463 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:38.549773 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:38.669659 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:38.701996 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:39.050495 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:39.169641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:39.201887 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:39.550593 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:39.670566 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:39.702072 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:40.050380 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:40.169307 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:40.201420 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:40.550999 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:40.669715 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:40.701440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:41.050230 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:41.168879 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:41.202325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:41.550624 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:41.669747 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:41.701809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:42.050493 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:42.169211 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:42.201520 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:42.550682 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:42.669305 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:42.701468 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:43.050555 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:43.169709 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:43.201742 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:43.550616 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:43.669985 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:43.702199 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:44.050462 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:44.168863 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:44.201969 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:44.550657 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:44.669862 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:44.702322 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:45.051337 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:45.169209 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:45.202025 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:45.550160 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:45.668972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:45.701927 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:46.050307 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:46.168971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:46.202059 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:46.551128 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:46.668578 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:46.702834 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:47.050852 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:47.169959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:47.202008 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:47.551425 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:47.669309 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:47.701110 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:48.051016 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:48.169525 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:48.201587 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:48.550480 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:48.669034 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:48.702415 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:49.050601 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:49.168823 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:49.201585 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:49.550210 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:49.669046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:49.701888 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:50.050296 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:50.169631 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:50.201503 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:50.551501 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:50.669281 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:50.702511 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:51.050900 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:51.169612 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:51.201816 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:51.552111 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:51.671918 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:51.702548 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:52.050260 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:52.168832 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:52.202188 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:52.550695 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:52.669650 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:52.702333 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:53.052245 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:53.169200 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:53.201611 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:53.550672 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:53.669444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:53.701777 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:54.051130 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:54.168868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:54.202046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:54.550431 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:54.669306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:54.701904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:55.051015 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:55.170280 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:55.201214 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:55.553236 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:55.668853 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:55.702340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:56.051092 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:56.169953 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:56.202452 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:56.551212 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:56.668750 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:56.702523 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:57.050964 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:57.169807 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:57.201803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:57.550211 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:57.668876 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:57.707900 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:58.050191 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:58.168681 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:58.202039 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:58.550833 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:58.669610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:58.701767 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:59.051468 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:59.169107 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:59.202715 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:59.551047 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:59.670592 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:59.701979 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:00.050778 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:00.169383 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:00.201834 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:00.551100 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:00.669963 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:00.771411 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:01.054273 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:01.169271 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:01.201602 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:01.550680 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:01.669283 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:01.701522 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:02.052977 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:02.169224 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:02.202291 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:02.550191 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:02.669159 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:02.701813 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:03.049670 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:03.198193 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:03.213735 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:03.551488 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:03.669126 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:03.704574 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:04.050148 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:04.169130 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:04.200961 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:04.550132 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:04.684815 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:04.702791 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:05.177951 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:05.178289 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:05.204849 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:05.551607 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:05.670725 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:05.708916 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:06.050874 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:06.172293 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:06.201971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:06.551280 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:06.669334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:06.701067 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:07.051436 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:07.169708 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:07.202011 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:07.552925 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:07.668863 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:07.701641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:08.050688 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:08.168959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:08.202195 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:08.550600 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:08.668882 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:08.702599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:09.051177 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:09.168919 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:09.203167 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:09.550992 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:09.669419 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:09.701472 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:10.051368 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:10.169506 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:10.201966 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:10.923307 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:10.927584 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:10.927913 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:11.050639 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:11.170106 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:11.272444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:11.552898 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:11.669527 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:11.701595 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:12.050322 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:12.168886 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:12.201829 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:12.550464 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:12.669150 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:12.771687 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:13.050505 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:13.169760 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:13.204975 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:13.551502 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:13.669335 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:13.701321 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:14.050505 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:14.170895 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:14.209305 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:14.550917 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:14.670374 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:14.703360 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:15.056811 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:15.170547 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:15.201903 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:15.551103 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:15.669672 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:15.701742 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:16.051467 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:16.169954 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:16.203694 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:16.551142 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:16.669768 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:16.702805 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:17.051501 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:17.169205 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:17.202951 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:17.551252 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:17.668660 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:17.701825 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:18.051434 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:18.171325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:18.203909 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:18.551201 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:18.670054 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:18.702443 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:19.050156 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:19.468641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:19.469516 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:19.550943 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:19.669264 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:19.759545 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:20.058136 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:20.170948 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:20.203636 1013451 kapi.go:107] duration metric: took 1m39.506848143s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 14:08:20.550335 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:20.668839 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:21.051466 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:21.169190 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:21.550095 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:21.668827 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:22.051580 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:22.169470 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:22.550664 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:22.669514 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:23.051018 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:23.169957 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:23.550439 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:23.669931 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:24.053965 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:24.169878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:24.550387 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:24.669803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:25.056975 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:25.172567 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:25.551153 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:25.670581 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:26.051385 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:26.169530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:26.551217 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:26.669338 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:27.050638 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:27.170170 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:27.550781 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:27.669538 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:28.051621 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:28.169483 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:28.550676 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:28.669440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:29.050516 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:29.169375 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:29.551751 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:29.669212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:30.050939 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:30.169393 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:30.550455 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:30.669253 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:31.050996 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:31.170070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:31.550206 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:31.668763 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:32.051626 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:32.169320 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:32.551069 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:32.669837 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:33.050330 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:33.168620 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:33.550910 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:33.670232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:34.051832 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:34.169178 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:34.550237 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:34.668760 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:35.051600 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:35.168763 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:35.551988 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:35.669108 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:36.051060 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:36.170390 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:36.550794 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:36.670426 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:37.050690 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:37.169249 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:37.550576 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:37.669601 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:38.051570 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:38.169093 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:38.550515 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:38.669589 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:39.050556 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:39.169165 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:39.549996 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:39.669744 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:40.051936 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:40.169233 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:40.551315 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:40.669719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:41.051496 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:41.169933 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:41.550270 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:41.669462 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:42.051430 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:42.169435 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:42.550648 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:42.669559 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:43.051075 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:43.170173 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:43.550411 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:43.669019 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:44.051147 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:44.169943 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:44.550616 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:44.669541 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:45.051936 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:45.169481 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:45.551946 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:45.669610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:46.051573 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:46.169440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:46.551239 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:46.669157 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:47.050473 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:47.169232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:47.550542 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:47.669197 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:48.050628 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:48.169232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:48.550646 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:48.669371 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:49.050350 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:49.168809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:49.552159 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:49.668741 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:50.096074 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:50.194902 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:50.551924 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:50.669444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:51.051559 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:51.169244 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:51.550779 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:51.669835 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:52.051039 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:52.170723 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:52.551544 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:52.669556 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:53.050634 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:53.169497 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:53.551283 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:53.670037 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:54.051147 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:54.170233 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:54.550184 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:54.669816 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:55.051429 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:55.169212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:55.550803 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:55.668993 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:56.050841 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:56.169885 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:56.550306 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:56.670189 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:57.050387 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:57.170258 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:57.551101 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:57.669797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:58.051185 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:58.170985 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:58.550560 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:58.676095 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:59.051442 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:59.169894 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:59.551564 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:59.670164 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:00.050493 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:00.170055 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:00.581252 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:00.780484 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:01.055777 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:01.174697 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:01.552096 1013451 kapi.go:107] duration metric: took 2m22.006221923s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 14:09:01.671070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:02.169799 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:02.683707 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:03.169279 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:03.670330 1013451 kapi.go:107] duration metric: took 2m20.004881029s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0127 14:09:03.672423 1013451 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-097644 cluster.
	I0127 14:09:03.673752 1013451 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 14:09:03.675214 1013451 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 14:09:03.676891 1013451 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner-rancher, nvidia-device-plugin, amd-gpu-device-plugin, metrics-server, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0127 14:09:03.678180 1013451 addons.go:514] duration metric: took 2m32.875560916s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner-rancher nvidia-device-plugin amd-gpu-device-plugin metrics-server storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0127 14:09:03.678236 1013451 start.go:246] waiting for cluster config update ...
	I0127 14:09:03.678259 1013451 start.go:255] writing updated cluster config ...
	I0127 14:09:03.678549 1013451 ssh_runner.go:195] Run: rm -f paused
	I0127 14:09:03.733995 1013451 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:09:03.735875 1013451 out.go:177] * Done! kubectl is now configured to use "addons-097644" cluster and "default" namespace by default
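	The gcp-auth hints printed above amount to two actions: keep the credential secret out of a specific pod by labelling it, or push credentials into already-running pods by re-enabling the addon. A minimal sketch, assuming a hypothetical pod named demo-pod with a placeholder busybox image, and assuming the conventional "true" value for the label; only the gcp-auth-skip-secret label key itself is taken from the addon message above:
	
	    # Pod manifest that opts out of credential mounting (illustrative only)
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: demo-pod                      # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"      # key from the gcp-auth hint; value assumed
	    spec:
	      containers:
	      - name: app
	        image: busybox                    # placeholder image
	        command: ["sleep", "3600"]
	
	For pods that already exist, the hint above suggests either recreating them or rerunning the addon with the --refresh flag, e.g. `minikube addons enable gcp-auth --refresh`, since the credential mount is injected at pod admission time.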
	
	
	==> CRI-O <==
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.441502436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987495441471168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=714fc9b8-c9de-4250-81a7-894891d0ed09 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.442278071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44340a47-4711-4b77-a072-f67a710c17e5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.442370624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44340a47-4711-4b77-a072-f67a710c17e5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.442770874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416
951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f
6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af
631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad1
7243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:
a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8
870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a
,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea
7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44340a47-4711-4b77-a072-f67a710c17e5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.491509421Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa73e888-f722-4e06-a06b-bae7bcc8f79e name=/runtime.v1.RuntimeService/Version
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.491620725Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa73e888-f722-4e06-a06b-bae7bcc8f79e name=/runtime.v1.RuntimeService/Version
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.492782843Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3053469-fc99-44dc-95e1-99fc99ecca46 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.494330839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987495494302353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3053469-fc99-44dc-95e1-99fc99ecca46 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.495140859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1889edd-aef4-4bb8-bcfe-8b21097e7167 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.495203732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1889edd-aef4-4bb8-bcfe-8b21097e7167 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.495471535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416
951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f
6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af
631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad1
7243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:
a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8
870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a
,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea
7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1889edd-aef4-4bb8-bcfe-8b21097e7167 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.534095415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26815249-223d-49ba-a282-ce8d64caf479 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.534197068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26815249-223d-49ba-a282-ce8d64caf479 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.535469126Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62d6c567-38b6-4175-9503-295bbf001353 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.536750874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987495536717225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62d6c567-38b6-4175-9503-295bbf001353 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.537515846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33b7fa07-581e-4152-8956-17a321731853 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.537598093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33b7fa07-581e-4152-8956-17a321731853 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.538003165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416
951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f
6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af
631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad1
7243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:
a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8
870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a
,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea
7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33b7fa07-581e-4152-8956-17a321731853 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.576273712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=462a47c2-2b12-4f9b-8e66-d1f76166e325 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.576369582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=462a47c2-2b12-4f9b-8e66-d1f76166e325 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.578119818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2c622a1-7928-4b76-a70b-9e5a20784ae0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.579211243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987495579183140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2c622a1-7928-4b76-a70b-9e5a20784ae0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.579804099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae2158de-9202-47f7-a39f-f350121c3ff1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.579934612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae2158de-9202-47f7-a39f-f350121c3ff1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:18:15 addons-097644 crio[657]: time="2025-01-27 14:18:15.580917466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416
951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f
6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[stri
ng]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af
631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad1
7243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:
a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8
870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a
,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea
7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae2158de-9202-47f7-a39f-f350121c3ff1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1f81789dc134       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          9 minutes ago       Running             busybox                   0                   5352d026f28eb       busybox
	31c99b76a81dc       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             9 minutes ago       Running             controller                0                   4b29b0e077591       ingress-nginx-controller-56d7c84fd4-nz5zf
	58904f506013f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   10 minutes ago      Exited              patch                     0                   331461d468a02       ingress-nginx-admission-patch-bzwfx
	6c9f1bf88ae46       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   10 minutes ago      Exited              create                    0                   2b7094e2898b6       ingress-nginx-admission-create-k6p8j
	623ee8fa39474       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     11 minutes ago      Running             amd-gpu-device-plugin     0                   0a6270a918122       amd-gpu-device-plugin-89xv2
	05863be1b9fa2       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             11 minutes ago      Running             minikube-ingress-dns      0                   966718e37de57       kube-ingress-dns-minikube
	d33c8ab68a095       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             11 minutes ago      Running             storage-provisioner       0                   a26522c3d4205       storage-provisioner
	2c916e18de1c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             11 minutes ago      Running             coredns                   0                   548cc3bbe430b       coredns-668d6bf9bc-f5h88
	f90efac6917c6       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             11 minutes ago      Running             kube-proxy                0                   8b4984c018663       kube-proxy-f4zwd
	c5e0a45028148       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             11 minutes ago      Running             etcd                      0                   a8b62c040eb6f       etcd-addons-097644
	726cfe5819ce4       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             11 minutes ago      Running             kube-scheduler            0                   37576819d5068       kube-scheduler-addons-097644
	507cc4bfd4bac       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             11 minutes ago      Running             kube-apiserver            0                   eb6ed8d17f58c       kube-apiserver-addons-097644
	ca97beecbf34e       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             11 minutes ago      Running             kube-controller-manager   0                   0c77accc1a4c1       kube-controller-manager-addons-097644
	
	
	==> coredns [2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079] <==
	[INFO] 10.244.0.8:34771 - 41457 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000328061s
	[INFO] 10.244.0.8:34771 - 47939 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000166673s
	[INFO] 10.244.0.8:34771 - 30775 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000224059s
	[INFO] 10.244.0.8:34771 - 16890 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093275s
	[INFO] 10.244.0.8:34771 - 16011 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000165914s
	[INFO] 10.244.0.8:34771 - 48692 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000088562s
	[INFO] 10.244.0.8:34771 - 33081 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000156426s
	[INFO] 10.244.0.8:55120 - 55152 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154188s
	[INFO] 10.244.0.8:55120 - 55445 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200323s
	[INFO] 10.244.0.8:54848 - 11098 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079484s
	[INFO] 10.244.0.8:54848 - 10854 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000185223s
	[INFO] 10.244.0.8:52222 - 8992 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065065s
	[INFO] 10.244.0.8:52222 - 8727 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141435s
	[INFO] 10.244.0.8:35583 - 57125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071096s
	[INFO] 10.244.0.8:35583 - 56925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00025462s
	[INFO] 10.244.0.23:58183 - 7007 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00047367s
	[INFO] 10.244.0.23:56358 - 26808 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002454598s
	[INFO] 10.244.0.23:37519 - 11515 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000306136s
	[INFO] 10.244.0.23:56095 - 53118 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000073046s
	[INFO] 10.244.0.23:52826 - 17024 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167726s
	[INFO] 10.244.0.23:58700 - 37913 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072195s
	[INFO] 10.244.0.23:59320 - 25584 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001303055s
	[INFO] 10.244.0.23:59906 - 15774 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001635555s
	[INFO] 10.244.0.27:50450 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00056016s
	[INFO] 10.244.0.27:51006 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141678s
	
	
	==> describe nodes <==
	Name:               addons-097644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-097644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=addons-097644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_06_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-097644
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:06:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-097644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:18:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:17:38 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:17:38 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:17:38 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:17:38 +0000   Mon, 27 Jan 2025 14:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    addons-097644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 53015ffc2749464aa9b7aa6eb16c09c0
	  System UUID:                53015ffc-2749-464a-a9b7-aa6eb16c09c0
	  Boot ID:                    b226972f-a6fa-415b-9827-3320ed4fb6de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-nz5zf    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 amd-gpu-device-plugin-89xv2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-668d6bf9bc-f5h88                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-addons-097644                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-097644                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-097644        200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-f4zwd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-097644                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node addons-097644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node addons-097644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node addons-097644 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m   kubelet          Node addons-097644 status is now: NodeReady
	  Normal  RegisteredNode           11m   node-controller  Node addons-097644 event: Registered Node addons-097644 in Controller
	
	
	==> dmesg <==
	[  +5.139758] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.148412] systemd-fstab-generator[1389]: Ignoring "noauto" option for root device
	[  +4.853975] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.047705] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.422180] kauditd_printk_skb: 124 callbacks suppressed
	[Jan27 14:07] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.435741] kauditd_printk_skb: 8 callbacks suppressed
	[ +16.990262] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 14:08] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.413265] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.243017] kauditd_printk_skb: 38 callbacks suppressed
	[Jan27 14:09] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.625061] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.938591] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.071460] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.141586] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.033258] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.978501] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.866607] kauditd_printk_skb: 11 callbacks suppressed
	[Jan27 14:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.534292] kauditd_printk_skb: 3 callbacks suppressed
	[ +13.735780] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.706759] kauditd_printk_skb: 24 callbacks suppressed
	[Jan27 14:11] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 14:15] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183] <==
	{"level":"info","ts":"2025-01-27T14:08:10.905648Z","caller":"traceutil/trace.go:171","msg":"trace[1424528345] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"221.920939ms","start":"2025-01-27T14:08:10.683718Z","end":"2025-01-27T14:08:10.905639Z","steps":["trace[1424528345] 'agreement among raft nodes before linearized reading'  (duration: 220.979628ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:10.904722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.96691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:10.906143Z","caller":"traceutil/trace.go:171","msg":"trace[918443162] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1047; }","duration":"299.34809ms","start":"2025-01-27T14:08:10.606727Z","end":"2025-01-27T14:08:10.906075Z","steps":["trace[918443162] 'agreement among raft nodes before linearized reading'  (duration: 297.968536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:10.904817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.575661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:10.908832Z","caller":"traceutil/trace.go:171","msg":"trace[148756941] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"256.607672ms","start":"2025-01-27T14:08:10.652214Z","end":"2025-01-27T14:08:10.908821Z","steps":["trace[148756941] 'agreement among raft nodes before linearized reading'  (duration: 252.568435ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:08:19.448000Z","caller":"traceutil/trace.go:171","msg":"trace[656750930] linearizableReadLoop","detail":"{readStateIndex:1139; appliedIndex:1138; }","duration":"296.312018ms","start":"2025-01-27T14:08:19.151675Z","end":"2025-01-27T14:08:19.447987Z","steps":["trace[656750930] 'read index received'  (duration: 296.141594ms)","trace[656750930] 'applied index is now lower than readState.Index'  (duration: 169.942µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:08:19.448186Z","caller":"traceutil/trace.go:171","msg":"trace[868736163] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"383.593879ms","start":"2025-01-27T14:08:19.064585Z","end":"2025-01-27T14:08:19.448179Z","steps":["trace[868736163] 'process raft request'  (duration: 383.321546ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:19.448344Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:08:19.064555Z","time spent":"383.668202ms","remote":"127.0.0.1:48734","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-097644\" mod_revision:1041 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-097644\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-097644\" > >"}
	{"level":"warn","ts":"2025-01-27T14:08:19.448623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.485588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:19.449317Z","caller":"traceutil/trace.go:171","msg":"trace[684967347] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"266.209192ms","start":"2025-01-27T14:08:19.183097Z","end":"2025-01-27T14:08:19.449306Z","steps":["trace[684967347] 'agreement among raft nodes before linearized reading'  (duration: 265.481327ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:19.448655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.980294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:19.449472Z","caller":"traceutil/trace.go:171","msg":"trace[1855013821] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"297.811336ms","start":"2025-01-27T14:08:19.151651Z","end":"2025-01-27T14:08:19.449462Z","steps":["trace[1855013821] 'agreement among raft nodes before linearized reading'  (duration: 296.993016ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:00.558225Z","caller":"traceutil/trace.go:171","msg":"trace[1913945553] transaction","detail":"{read_only:false; response_revision:1172; number_of_response:1; }","duration":"241.852683ms","start":"2025-01-27T14:09:00.316354Z","end":"2025-01-27T14:09:00.558207Z","steps":["trace[1913945553] 'process raft request'  (duration: 241.733069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:09:00.758982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.118372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:09:00.759127Z","caller":"traceutil/trace.go:171","msg":"trace[1498771159] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1172; }","duration":"109.340678ms","start":"2025-01-27T14:09:00.649774Z","end":"2025-01-27T14:09:00.759114Z","steps":["trace[1498771159] 'range keys from in-memory index tree'  (duration: 109.071803ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:46.406933Z","caller":"traceutil/trace.go:171","msg":"trace[1886057008] transaction","detail":"{read_only:false; response_revision:1428; number_of_response:1; }","duration":"194.14911ms","start":"2025-01-27T14:09:46.212756Z","end":"2025-01-27T14:09:46.406905Z","steps":["trace[1886057008] 'process raft request'  (duration: 193.987326ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:46.407441Z","caller":"traceutil/trace.go:171","msg":"trace[278748796] linearizableReadLoop","detail":"{readStateIndex:1488; appliedIndex:1488; }","duration":"179.099246ms","start":"2025-01-27T14:09:46.228323Z","end":"2025-01-27T14:09:46.407422Z","steps":["trace[278748796] 'read index received'  (duration: 179.093014ms)","trace[278748796] 'applied index is now lower than readState.Index'  (duration: 5.429µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:09:46.407629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.267358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab\" limit:1 ","response":"range_response_count:1 size:4006"}
	{"level":"info","ts":"2025-01-27T14:09:46.407673Z","caller":"traceutil/trace.go:171","msg":"trace[2015123404] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab; range_end:; response_count:1; response_revision:1428; }","duration":"179.426533ms","start":"2025-01-27T14:09:46.228236Z","end":"2025-01-27T14:09:46.407663Z","steps":["trace[2015123404] 'agreement among raft nodes before linearized reading'  (duration: 179.274245ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:10:20.372020Z","caller":"traceutil/trace.go:171","msg":"trace[308720478] linearizableReadLoop","detail":"{readStateIndex:1636; appliedIndex:1635; }","duration":"166.921538ms","start":"2025-01-27T14:10:20.205070Z","end":"2025-01-27T14:10:20.371992Z","steps":["trace[308720478] 'read index received'  (duration: 164.842263ms)","trace[308720478] 'applied index is now lower than readState.Index'  (duration: 2.078354ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:10:20.372181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.088702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:10:20.372218Z","caller":"traceutil/trace.go:171","msg":"trace[543223298] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1566; }","duration":"167.165514ms","start":"2025-01-27T14:10:20.205047Z","end":"2025-01-27T14:10:20.372213Z","steps":["trace[543223298] 'agreement among raft nodes before linearized reading'  (duration: 167.085674ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:16:21.777818Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1723}
	{"level":"info","ts":"2025-01-27T14:16:21.815906Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1723,"took":"37.400287ms","hash":3810686376,"current-db-size-bytes":7495680,"current-db-size":"7.5 MB","current-db-size-in-use-bytes":4636672,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2025-01-27T14:16:21.816002Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3810686376,"revision":1723,"compact-revision":-1}
	
	
	==> kernel <==
	 14:18:15 up 12 min,  0 users,  load average: 0.76, 0.70, 0.52
	Linux addons-097644 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19] <==
	 > logger="UnhandledError"
	E0127 14:07:12.376214       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.247.116:443: connect: connection refused" logger="UnhandledError"
	E0127 14:07:12.378540       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.247.116:443: connect: connection refused" logger="UnhandledError"
	I0127 14:07:12.446208       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0127 14:09:14.350334       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:50822: use of closed network connection
	E0127 14:09:14.546345       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:50860: use of closed network connection
	I0127 14:09:23.868341       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.143.116"}
	I0127 14:10:08.420199       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 14:10:09.465650       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0127 14:10:13.397817       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 14:10:13.989804       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 14:10:14.197521       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.220.0"}
	I0127 14:15:57.159209       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:15:57.159292       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:15:57.194730       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:15:57.195457       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:15:57.230063       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:15:57.230648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:15:57.236294       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:15:57.236732       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:15:57.268492       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:15:57.268941       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0127 14:15:58.236594       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0127 14:15:58.269294       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0127 14:15:58.359161       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312] <==
	W0127 14:17:25.638337       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:17:25.639322       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 14:17:25.640436       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:17:25.640530       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0127 14:17:29.588248       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	I0127 14:17:38.481389       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="addons-097644"
	E0127 14:17:44.588826       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W0127 14:17:47.041722       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:17:47.042892       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0127 14:17:47.044110       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:17:47.044179       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 14:17:52.234431       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:17:52.235590       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:17:52.236580       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:17:52.236650       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 14:17:54.140938       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:17:54.142450       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 14:17:54.143402       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:17:54.143470       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0127 14:17:59.589629       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W0127 14:18:13.082066       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:18:13.083295       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 14:18:13.084153       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:18:13.084226       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0127 14:18:14.590125       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:06:31.963275       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:06:31.979022       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	E0127 14:06:31.979136       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:06:32.077913       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:06:32.077966       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:06:32.077989       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:06:32.084140       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:06:32.085000       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:06:32.085035       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:06:32.100525       1 config.go:199] "Starting service config controller"
	I0127 14:06:32.100558       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:06:32.100585       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:06:32.100589       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:06:32.101170       1 config.go:329] "Starting node config controller"
	I0127 14:06:32.101178       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:06:32.200914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:06:32.200982       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:06:32.201769       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975] <==
	W0127 14:06:23.028289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 14:06:23.028514       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.028258       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:23.028527       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.832298       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 14:06:23.832354       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.890209       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 14:06:23.890242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.952607       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 14:06:23.952764       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.012969       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:24.013220       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.013000       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 14:06:24.013543       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 14:06:24.051624       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 14:06:24.051685       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.102044       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 14:06:24.102173       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.130067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 14:06:24.130122       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.176207       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 14:06:24.176269       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.284632       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:24.284687       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 14:06:26.404586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:17:17 addons-097644 kubelet[1230]: E0127 14:17:17.804111    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8832c7fb-d1d2-4a01-8fb6-65e44ed2a850"
	Jan 27 14:17:25 addons-097644 kubelet[1230]: E0127 14:17:25.825540    1230 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:17:25 addons-097644 kubelet[1230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:17:25 addons-097644 kubelet[1230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:17:25 addons-097644 kubelet[1230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:17:25 addons-097644 kubelet[1230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:17:26 addons-097644 kubelet[1230]: E0127 14:17:26.210405    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987446209575556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:17:26 addons-097644 kubelet[1230]: E0127 14:17:26.210451    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987446209575556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:17:27 addons-097644 kubelet[1230]: E0127 14:17:27.805139    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	Jan 27 14:17:29 addons-097644 kubelet[1230]: E0127 14:17:29.807036    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8832c7fb-d1d2-4a01-8fb6-65e44ed2a850"
	Jan 27 14:17:36 addons-097644 kubelet[1230]: E0127 14:17:36.213032    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987456212657930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:17:36 addons-097644 kubelet[1230]: E0127 14:17:36.213075    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987456212657930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:17:40 addons-097644 kubelet[1230]: E0127 14:17:40.803450    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8832c7fb-d1d2-4a01-8fb6-65e44ed2a850"
	Jan 27 14:17:42 addons-097644 kubelet[1230]: E0127 14:17:42.804986    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	Jan 27 14:17:46 addons-097644 kubelet[1230]: E0127 14:17:46.215914    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987466215473799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:17:46 addons-097644 kubelet[1230]: E0127 14:17:46.215961    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987466215473799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:17:48 addons-097644 kubelet[1230]: W0127 14:17:48.254133    1230 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", }. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Jan 27 14:17:48 addons-097644 kubelet[1230]: I0127 14:17:48.803958    1230 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 14:17:54 addons-097644 kubelet[1230]: E0127 14:17:54.804726    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	Jan 27 14:17:56 addons-097644 kubelet[1230]: E0127 14:17:56.219077    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987476218623442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:17:56 addons-097644 kubelet[1230]: E0127 14:17:56.219129    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987476218623442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:18:06 addons-097644 kubelet[1230]: E0127 14:18:06.224205    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987486223142204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:18:06 addons-097644 kubelet[1230]: E0127 14:18:06.224568    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987486223142204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:18:06 addons-097644 kubelet[1230]: I0127 14:18:06.804224    1230 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-89xv2" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 14:18:09 addons-097644 kubelet[1230]: E0127 14:18:09.804976    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	
	
	==> storage-provisioner [d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2] <==
	I0127 14:06:41.758709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:06:41.803907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:06:41.804042       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:06:41.825628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:06:41.825800       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5!
	I0127 14:06:41.826617       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"798f666d-0618-4e6e-9910-6786e4bc55d6", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5 became leader
	I0127 14:06:41.926306       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-097644 -n addons-097644
helpers_test.go:261: (dbg) Run:  kubectl --context addons-097644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx: exit status 1 (84.819559ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-097644/192.168.39.228
	Start Time:       Mon, 27 Jan 2025 14:10:14 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hck28 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hck28:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m2s                  default-scheduler  Successfully assigned default/nginx to addons-097644
	  Normal   Pulling    2m10s (x4 over 8m2s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     76s (x4 over 6m47s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     76s (x4 over 6m47s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x10 over 6m47s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7s (x10 over 6m47s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-097644/192.168.39.228
	Start Time:       Mon, 27 Jan 2025 14:09:54 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vdzn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-9vdzn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m22s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-097644
	  Warning  Failed     107s (x4 over 7m18s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     107s (x4 over 7m18s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    36s (x10 over 7m17s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     36s (x10 over 7m17s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    21s (x5 over 8m22s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xj65w (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xj65w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-k6p8j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bzwfx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 addons disable ingress-dns --alsologtostderr -v=1: (1.072239854s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 addons disable ingress --alsologtostderr -v=1: (7.700372555s)
--- FAIL: TestAddons/parallel/Ingress (491.91s)
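Note: the nginx pod deployed by testdata/nginx-pod-svc.yaml never gets past ImagePullBackOff because of the Docker Hub rate limit reported in the events above, so the test times out waiting for the run=nginx pod before any ingress traffic is exercised. A rough sketch of what that manifest plausibly contains, reconstructed only from the describe output (image docker.io/nginx:alpine, label run=nginx, container port 80) and the apiserver's clusterIP allocation for service default/nginx, is shown below; the Service fields in particular are assumptions, since the file itself is not included in this report.

	apiVersion: v1
	kind: Pod
	metadata:
	  name: nginx
	  labels:
	    run: nginx                      # label the test waits on ("run=nginx")
	spec:
	  containers:
	  - name: nginx
	    image: docker.io/nginx:alpine   # the pull that hits the Docker Hub rate limit above
	    ports:
	    - containerPort: 80
	---
	apiVersion: v1
	kind: Service
	metadata:
	  name: nginx                       # the apiserver log shows a clusterIP allocated for default/nginx
	spec:
	  selector:
	    run: nginx
	  ports:
	  - port: 80
	    targetPort: 80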

                                                
                                    
TestAddons/parallel/CSI (388.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0127 14:09:35.762931 1012816 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 14:09:35.769990 1012816 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 14:09:35.770011 1012816 kapi.go:107] duration metric: took 7.096551ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.104309ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-097644 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-097644 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8832c7fb-d1d2-4a01-8fb6-65e44ed2a850] Pending
helpers_test.go:344: "task-pv-pod" [8832c7fb-d1d2-4a01-8fb6-65e44ed2a850] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:506: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:506: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-097644 -n addons-097644
addons_test.go:506: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-01-27 14:15:54.350911423 +0000 UTC m=+626.023705576
addons_test.go:506: (dbg) Run:  kubectl --context addons-097644 describe po task-pv-pod -n default
addons_test.go:506: (dbg) kubectl --context addons-097644 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-097644/192.168.39.228
Start Time:       Mon, 27 Jan 2025 14:09:54 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vdzn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-9vdzn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod to addons-097644
Warning  Failed     67s (x3 over 4m56s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     67s (x3 over 4m56s)  kubelet            Error: ErrImagePull
Normal   BackOff    39s (x4 over 4m55s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     39s (x4 over 4m55s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    24s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"
addons_test.go:506: (dbg) Run:  kubectl --context addons-097644 logs task-pv-pod -n default
addons_test.go:506: (dbg) Non-zero exit: kubectl --context addons-097644 logs task-pv-pod -n default: exit status 1 (77.528388ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:506: kubectl --context addons-097644 logs task-pv-pod -n default: exit status 1
addons_test.go:507: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
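Note: as in the Ingress test, the failure here is the docker.io/nginx pull hitting the Docker Hub rate limit, not the CSI driver itself; the pod is scheduled onto addons-097644 and mounts the hpvc claim, but task-pv-container never starts. A rough sketch of the claim and pod that testdata/csi-hostpath-driver/pvc.yaml and pv-pod.yaml plausibly define, reconstructed from the describe output above, follows; the storageClassName and storage size are assumptions, since the manifests are not included in this report.

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc                          # the PVC the test polls with "kubectl get pvc hpvc"
	spec:
	  accessModes:
	  - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi                    # assumed size; not shown in the report
	  storageClassName: csi-hostpath-sc   # assumed class provided by the csi-hostpath-driver addon
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: task-pv-pod
	  labels:
	    app: task-pv-pod                  # label the test waits on ("app=task-pv-pod")
	spec:
	  volumes:
	  - name: task-pv-storage
	    persistentVolumeClaim:
	      claimName: hpvc
	  containers:
	  - name: task-pv-container
	    image: docker.io/nginx            # the pull that fails with ImagePullBackOff above
	    ports:
	    - containerPort: 80
	    volumeMounts:
	    - name: task-pv-storage
	      mountPath: /usr/share/nginx/html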
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-097644 -n addons-097644
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 logs -n 25: (1.330926602s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | -p download-only-671066              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-671066              | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | -o=json --download-only              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | -p download-only-223205              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-223205              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-671066              | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-223205              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | --download-only -p                   | binary-mirror-105715 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | binary-mirror-105715                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46267               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-105715              | binary-mirror-105715 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| addons  | enable dashboard -p                  | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | addons-097644                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | addons-097644                        |                      |         |         |                     |                     |
	| start   | -p addons-097644 --wait=true         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:09 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | -p addons-097644                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:10 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-097644 ip                     | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:14 UTC |                     |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:05:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:05:43.780693 1013451 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:05:43.780813 1013451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:43.780825 1013451 out.go:358] Setting ErrFile to fd 2...
	I0127 14:05:43.780832 1013451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:43.781030 1013451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:05:43.781664 1013451 out.go:352] Setting JSON to false
	I0127 14:05:43.782666 1013451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17291,"bootTime":1737969453,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:05:43.782784 1013451 start.go:139] virtualization: kvm guest
	I0127 14:05:43.784893 1013451 out.go:177] * [addons-097644] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:05:43.787056 1013451 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:05:43.787061 1013451 notify.go:220] Checking for updates...
	I0127 14:05:43.789034 1013451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:05:43.790539 1013451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:05:43.791834 1013451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:43.792947 1013451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:05:43.794209 1013451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:05:43.795600 1013451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:05:43.828945 1013451 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:05:43.830536 1013451 start.go:297] selected driver: kvm2
	I0127 14:05:43.830549 1013451 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:05:43.830562 1013451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:05:43.831266 1013451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:43.831371 1013451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:05:43.846805 1013451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:05:43.846858 1013451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:05:43.847096 1013451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:05:43.847130 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:05:43.847177 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:05:43.847185 1013451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:05:43.847240 1013451 start.go:340] cluster config:
	{Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:05:43.847356 1013451 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:43.849197 1013451 out.go:177] * Starting "addons-097644" primary control-plane node in "addons-097644" cluster
	I0127 14:05:43.850425 1013451 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:05:43.850456 1013451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:05:43.850465 1013451 cache.go:56] Caching tarball of preloaded images
	I0127 14:05:43.850551 1013451 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:05:43.850561 1013451 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:05:43.850859 1013451 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json ...
	I0127 14:05:43.850881 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json: {Name:mkf76d9208747a70ff9df6e74ebaa16aff66d9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:43.851032 1013451 start.go:360] acquireMachinesLock for addons-097644: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:05:43.851095 1013451 start.go:364] duration metric: took 44.724µs to acquireMachinesLock for "addons-097644"
	I0127 14:05:43.851120 1013451 start.go:93] Provisioning new machine with config: &{Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:05:43.851186 1013451 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:05:43.852924 1013451 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0127 14:05:43.853096 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:05:43.853162 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:05:43.867886 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I0127 14:05:43.868410 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:05:43.868979 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:05:43.869040 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:05:43.869524 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:05:43.869744 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:05:43.869931 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:05:43.870113 1013451 start.go:159] libmachine.API.Create for "addons-097644" (driver="kvm2")
	I0127 14:05:43.870140 1013451 client.go:168] LocalClient.Create starting
	I0127 14:05:43.870192 1013451 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem
	I0127 14:05:43.971967 1013451 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem
	I0127 14:05:44.102745 1013451 main.go:141] libmachine: Running pre-create checks...
	I0127 14:05:44.102770 1013451 main.go:141] libmachine: (addons-097644) Calling .PreCreateCheck
	I0127 14:05:44.103352 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:05:44.103882 1013451 main.go:141] libmachine: Creating machine...
	I0127 14:05:44.103898 1013451 main.go:141] libmachine: (addons-097644) Calling .Create
	I0127 14:05:44.104114 1013451 main.go:141] libmachine: (addons-097644) creating KVM machine...
	I0127 14:05:44.104136 1013451 main.go:141] libmachine: (addons-097644) creating network...
	I0127 14:05:44.105430 1013451 main.go:141] libmachine: (addons-097644) DBG | found existing default KVM network
	I0127 14:05:44.106433 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.106217 1013473 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123ba0}
	I0127 14:05:44.106460 1013451 main.go:141] libmachine: (addons-097644) DBG | created network xml: 
	I0127 14:05:44.106474 1013451 main.go:141] libmachine: (addons-097644) DBG | <network>
	I0127 14:05:44.106506 1013451 main.go:141] libmachine: (addons-097644) DBG |   <name>mk-addons-097644</name>
	I0127 14:05:44.106520 1013451 main.go:141] libmachine: (addons-097644) DBG |   <dns enable='no'/>
	I0127 14:05:44.106527 1013451 main.go:141] libmachine: (addons-097644) DBG |   
	I0127 14:05:44.106538 1013451 main.go:141] libmachine: (addons-097644) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 14:05:44.106549 1013451 main.go:141] libmachine: (addons-097644) DBG |     <dhcp>
	I0127 14:05:44.106558 1013451 main.go:141] libmachine: (addons-097644) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 14:05:44.106566 1013451 main.go:141] libmachine: (addons-097644) DBG |     </dhcp>
	I0127 14:05:44.106585 1013451 main.go:141] libmachine: (addons-097644) DBG |   </ip>
	I0127 14:05:44.106598 1013451 main.go:141] libmachine: (addons-097644) DBG |   
	I0127 14:05:44.106608 1013451 main.go:141] libmachine: (addons-097644) DBG | </network>
	I0127 14:05:44.106620 1013451 main.go:141] libmachine: (addons-097644) DBG | 
	I0127 14:05:44.112205 1013451 main.go:141] libmachine: (addons-097644) DBG | trying to create private KVM network mk-addons-097644 192.168.39.0/24...
	I0127 14:05:44.180056 1013451 main.go:141] libmachine: (addons-097644) DBG | private KVM network mk-addons-097644 192.168.39.0/24 created
	I0127 14:05:44.180144 1013451 main.go:141] libmachine: (addons-097644) setting up store path in /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 ...
	I0127 14:05:44.180171 1013451 main.go:141] libmachine: (addons-097644) building disk image from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:05:44.180189 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.180124 1013473 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:44.180396 1013451 main.go:141] libmachine: (addons-097644) Downloading /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:05:44.489532 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.489354 1013473 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa...
	I0127 14:05:44.674691 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.674507 1013473 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/addons-097644.rawdisk...
	I0127 14:05:44.674726 1013451 main.go:141] libmachine: (addons-097644) DBG | Writing magic tar header
	I0127 14:05:44.674736 1013451 main.go:141] libmachine: (addons-097644) DBG | Writing SSH key tar header
	I0127 14:05:44.674747 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.674662 1013473 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 ...
	I0127 14:05:44.674836 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644
	I0127 14:05:44.674866 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 (perms=drwx------)
	I0127 14:05:44.674877 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines
	I0127 14:05:44.674890 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:44.674897 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652
	I0127 14:05:44.674908 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:05:44.674915 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins
	I0127 14:05:44.674926 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home
	I0127 14:05:44.674933 1013451 main.go:141] libmachine: (addons-097644) DBG | skipping /home - not owner
	I0127 14:05:44.674963 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:05:44.674987 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube (perms=drwxr-xr-x)
	I0127 14:05:44.675015 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652 (perms=drwxrwxr-x)
	I0127 14:05:44.675025 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:05:44.675035 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:05:44.675040 1013451 main.go:141] libmachine: (addons-097644) creating domain...
	I0127 14:05:44.676087 1013451 main.go:141] libmachine: (addons-097644) define libvirt domain using xml: 
	I0127 14:05:44.676112 1013451 main.go:141] libmachine: (addons-097644) <domain type='kvm'>
	I0127 14:05:44.676119 1013451 main.go:141] libmachine: (addons-097644)   <name>addons-097644</name>
	I0127 14:05:44.676125 1013451 main.go:141] libmachine: (addons-097644)   <memory unit='MiB'>4000</memory>
	I0127 14:05:44.676133 1013451 main.go:141] libmachine: (addons-097644)   <vcpu>2</vcpu>
	I0127 14:05:44.676142 1013451 main.go:141] libmachine: (addons-097644)   <features>
	I0127 14:05:44.676170 1013451 main.go:141] libmachine: (addons-097644)     <acpi/>
	I0127 14:05:44.676190 1013451 main.go:141] libmachine: (addons-097644)     <apic/>
	I0127 14:05:44.676198 1013451 main.go:141] libmachine: (addons-097644)     <pae/>
	I0127 14:05:44.676204 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676219 1013451 main.go:141] libmachine: (addons-097644)   </features>
	I0127 14:05:44.676234 1013451 main.go:141] libmachine: (addons-097644)   <cpu mode='host-passthrough'>
	I0127 14:05:44.676256 1013451 main.go:141] libmachine: (addons-097644)   
	I0127 14:05:44.676274 1013451 main.go:141] libmachine: (addons-097644)   </cpu>
	I0127 14:05:44.676285 1013451 main.go:141] libmachine: (addons-097644)   <os>
	I0127 14:05:44.676290 1013451 main.go:141] libmachine: (addons-097644)     <type>hvm</type>
	I0127 14:05:44.676295 1013451 main.go:141] libmachine: (addons-097644)     <boot dev='cdrom'/>
	I0127 14:05:44.676302 1013451 main.go:141] libmachine: (addons-097644)     <boot dev='hd'/>
	I0127 14:05:44.676329 1013451 main.go:141] libmachine: (addons-097644)     <bootmenu enable='no'/>
	I0127 14:05:44.676352 1013451 main.go:141] libmachine: (addons-097644)   </os>
	I0127 14:05:44.676365 1013451 main.go:141] libmachine: (addons-097644)   <devices>
	I0127 14:05:44.676382 1013451 main.go:141] libmachine: (addons-097644)     <disk type='file' device='cdrom'>
	I0127 14:05:44.676400 1013451 main.go:141] libmachine: (addons-097644)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/boot2docker.iso'/>
	I0127 14:05:44.676411 1013451 main.go:141] libmachine: (addons-097644)       <target dev='hdc' bus='scsi'/>
	I0127 14:05:44.676436 1013451 main.go:141] libmachine: (addons-097644)       <readonly/>
	I0127 14:05:44.676446 1013451 main.go:141] libmachine: (addons-097644)     </disk>
	I0127 14:05:44.676457 1013451 main.go:141] libmachine: (addons-097644)     <disk type='file' device='disk'>
	I0127 14:05:44.676474 1013451 main.go:141] libmachine: (addons-097644)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:05:44.676491 1013451 main.go:141] libmachine: (addons-097644)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/addons-097644.rawdisk'/>
	I0127 14:05:44.676503 1013451 main.go:141] libmachine: (addons-097644)       <target dev='hda' bus='virtio'/>
	I0127 14:05:44.676512 1013451 main.go:141] libmachine: (addons-097644)     </disk>
	I0127 14:05:44.676523 1013451 main.go:141] libmachine: (addons-097644)     <interface type='network'>
	I0127 14:05:44.676535 1013451 main.go:141] libmachine: (addons-097644)       <source network='mk-addons-097644'/>
	I0127 14:05:44.676543 1013451 main.go:141] libmachine: (addons-097644)       <model type='virtio'/>
	I0127 14:05:44.676554 1013451 main.go:141] libmachine: (addons-097644)     </interface>
	I0127 14:05:44.676567 1013451 main.go:141] libmachine: (addons-097644)     <interface type='network'>
	I0127 14:05:44.676577 1013451 main.go:141] libmachine: (addons-097644)       <source network='default'/>
	I0127 14:05:44.676588 1013451 main.go:141] libmachine: (addons-097644)       <model type='virtio'/>
	I0127 14:05:44.676597 1013451 main.go:141] libmachine: (addons-097644)     </interface>
	I0127 14:05:44.676607 1013451 main.go:141] libmachine: (addons-097644)     <serial type='pty'>
	I0127 14:05:44.676615 1013451 main.go:141] libmachine: (addons-097644)       <target port='0'/>
	I0127 14:05:44.676624 1013451 main.go:141] libmachine: (addons-097644)     </serial>
	I0127 14:05:44.676638 1013451 main.go:141] libmachine: (addons-097644)     <console type='pty'>
	I0127 14:05:44.676650 1013451 main.go:141] libmachine: (addons-097644)       <target type='serial' port='0'/>
	I0127 14:05:44.676666 1013451 main.go:141] libmachine: (addons-097644)     </console>
	I0127 14:05:44.676678 1013451 main.go:141] libmachine: (addons-097644)     <rng model='virtio'>
	I0127 14:05:44.676688 1013451 main.go:141] libmachine: (addons-097644)       <backend model='random'>/dev/random</backend>
	I0127 14:05:44.676695 1013451 main.go:141] libmachine: (addons-097644)     </rng>
	I0127 14:05:44.676702 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676711 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676720 1013451 main.go:141] libmachine: (addons-097644)   </devices>
	I0127 14:05:44.676726 1013451 main.go:141] libmachine: (addons-097644) </domain>
	I0127 14:05:44.676788 1013451 main.go:141] libmachine: (addons-097644) 
	I0127 14:05:44.681531 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:bc:17:24 in network default
	I0127 14:05:44.682103 1013451 main.go:141] libmachine: (addons-097644) starting domain...
	I0127 14:05:44.682120 1013451 main.go:141] libmachine: (addons-097644) ensuring networks are active...
	I0127 14:05:44.682127 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:44.682898 1013451 main.go:141] libmachine: (addons-097644) Ensuring network default is active
	I0127 14:05:44.683272 1013451 main.go:141] libmachine: (addons-097644) Ensuring network mk-addons-097644 is active
	I0127 14:05:44.683705 1013451 main.go:141] libmachine: (addons-097644) getting domain XML...
	I0127 14:05:44.684437 1013451 main.go:141] libmachine: (addons-097644) creating domain...
	I0127 14:05:45.896162 1013451 main.go:141] libmachine: (addons-097644) waiting for IP...
	I0127 14:05:45.896892 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:45.897344 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:45.897436 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:45.897354 1013473 retry.go:31] will retry after 236.581088ms: waiting for domain to come up
	I0127 14:05:46.135836 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.136377 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.136409 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.136324 1013473 retry.go:31] will retry after 316.29449ms: waiting for domain to come up
	I0127 14:05:46.454651 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.455132 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.455160 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.455064 1013473 retry.go:31] will retry after 470.066632ms: waiting for domain to come up
	I0127 14:05:46.926708 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.927233 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.927260 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.927215 1013473 retry.go:31] will retry after 394.465051ms: waiting for domain to come up
	I0127 14:05:47.322830 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:47.323381 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:47.323413 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:47.323322 1013473 retry.go:31] will retry after 512.0087ms: waiting for domain to come up
	I0127 14:05:47.837180 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:47.837627 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:47.837654 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:47.837597 1013473 retry.go:31] will retry after 602.684619ms: waiting for domain to come up
	I0127 14:05:48.441447 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:48.441865 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:48.441895 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:48.441834 1013473 retry.go:31] will retry after 1.057148427s: waiting for domain to come up
	I0127 14:05:49.501034 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:49.501504 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:49.501527 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:49.501455 1013473 retry.go:31] will retry after 1.147761253s: waiting for domain to come up
	I0127 14:05:50.651314 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:50.651817 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:50.651882 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:50.651766 1013473 retry.go:31] will retry after 1.445396149s: waiting for domain to come up
	I0127 14:05:52.098809 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:52.099216 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:52.099250 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:52.099170 1013473 retry.go:31] will retry after 2.075111556s: waiting for domain to come up
	I0127 14:05:54.175631 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:54.176081 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:54.176131 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:54.176071 1013473 retry.go:31] will retry after 1.984245215s: waiting for domain to come up
	I0127 14:05:56.163386 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:56.163785 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:56.163814 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:56.163743 1013473 retry.go:31] will retry after 2.265903927s: waiting for domain to come up
	I0127 14:05:58.432199 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:58.432532 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:58.432610 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:58.432499 1013473 retry.go:31] will retry after 4.367217291s: waiting for domain to come up
	I0127 14:06:02.802210 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:02.802571 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:06:02.802600 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:06:02.802549 1013473 retry.go:31] will retry after 3.598012851s: waiting for domain to come up
	I0127 14:06:06.403574 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.404009 1013451 main.go:141] libmachine: (addons-097644) found domain IP: 192.168.39.228
	I0127 14:06:06.404030 1013451 main.go:141] libmachine: (addons-097644) reserving static IP address...
	I0127 14:06:06.404042 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has current primary IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.404496 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find host DHCP lease matching {name: "addons-097644", mac: "52:54:00:9d:d4:27", ip: "192.168.39.228"} in network mk-addons-097644
	I0127 14:06:06.482117 1013451 main.go:141] libmachine: (addons-097644) reserved static IP address 192.168.39.228 for domain addons-097644
	I0127 14:06:06.482150 1013451 main.go:141] libmachine: (addons-097644) DBG | Getting to WaitForSSH function...
	I0127 14:06:06.482159 1013451 main.go:141] libmachine: (addons-097644) waiting for SSH...
	I0127 14:06:06.484542 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.484916 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.484946 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.485093 1013451 main.go:141] libmachine: (addons-097644) DBG | Using SSH client type: external
	I0127 14:06:06.485123 1013451 main.go:141] libmachine: (addons-097644) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa (-rw-------)
	I0127 14:06:06.485171 1013451 main.go:141] libmachine: (addons-097644) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:06:06.485189 1013451 main.go:141] libmachine: (addons-097644) DBG | About to run SSH command:
	I0127 14:06:06.485232 1013451 main.go:141] libmachine: (addons-097644) DBG | exit 0
	I0127 14:06:06.609772 1013451 main.go:141] libmachine: (addons-097644) DBG | SSH cmd err, output: <nil>: 
	I0127 14:06:06.610069 1013451 main.go:141] libmachine: (addons-097644) KVM machine creation complete
	I0127 14:06:06.610555 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:06:06.611165 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:06.611373 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:06.611586 1013451 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:06:06.611621 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:06.613057 1013451 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:06:06.613073 1013451 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:06:06.613081 1013451 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:06:06.613090 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.615644 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.616035 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.616063 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.616199 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.616362 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.616508 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.616657 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.616824 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.617054 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.617068 1013451 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:06:06.716630 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:06:06.716673 1013451 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:06:06.716681 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.719631 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.719945 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.719967 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.720264 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.720503 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.720685 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.720841 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.721000 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.721236 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.721251 1013451 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:06:06.826035 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:06:06.826137 1013451 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:06:06.826152 1013451 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:06:06.826166 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:06.826460 1013451 buildroot.go:166] provisioning hostname "addons-097644"
	I0127 14:06:06.826496 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:06.826730 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.829265 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.829710 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.829746 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.829916 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.830136 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.830299 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.830442 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.830601 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.830779 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.830790 1013451 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-097644 && echo "addons-097644" | sudo tee /etc/hostname
	I0127 14:06:06.943475 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-097644
	
	I0127 14:06:06.943511 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.946454 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.946884 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.946916 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.947078 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.947278 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.947449 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.947589 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.947760 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.947980 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.948004 1013451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-097644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-097644/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-097644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:06:07.054387 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:06:07.054446 1013451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 14:06:07.054503 1013451 buildroot.go:174] setting up certificates
	I0127 14:06:07.054527 1013451 provision.go:84] configureAuth start
	I0127 14:06:07.054547 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:07.054845 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.057428 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.057824 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.057852 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.057989 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.060187 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.060520 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.060546 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.060713 1013451 provision.go:143] copyHostCerts
	I0127 14:06:07.060793 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 14:06:07.060906 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 14:06:07.060974 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 14:06:07.061053 1013451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.addons-097644 san=[127.0.0.1 192.168.39.228 addons-097644 localhost minikube]
	I0127 14:06:07.171259 1013451 provision.go:177] copyRemoteCerts
	I0127 14:06:07.171332 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:06:07.171359 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.173936 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.174300 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.174345 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.174507 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.174718 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.174901 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.175049 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.256072 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:06:07.280263 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 14:06:07.304563 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:06:07.328463 1013451 provision.go:87] duration metric: took 273.91293ms to configureAuth
	I0127 14:06:07.328503 1013451 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:06:07.328710 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:07.328812 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.331515 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.331824 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.331855 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.332095 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.332304 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.332494 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.332664 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.332827 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:07.333034 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:07.333056 1013451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:06:07.551437 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:06:07.551470 1013451 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:06:07.551481 1013451 main.go:141] libmachine: (addons-097644) Calling .GetURL
	I0127 14:06:07.552717 1013451 main.go:141] libmachine: (addons-097644) DBG | using libvirt version 6000000
	I0127 14:06:07.554862 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.555265 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.555309 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.555465 1013451 main.go:141] libmachine: Docker is up and running!
	I0127 14:06:07.555482 1013451 main.go:141] libmachine: Reticulating splines...
	I0127 14:06:07.555493 1013451 client.go:171] duration metric: took 23.685342954s to LocalClient.Create
	I0127 14:06:07.555525 1013451 start.go:167] duration metric: took 23.68541238s to libmachine.API.Create "addons-097644"
	I0127 14:06:07.555552 1013451 start.go:293] postStartSetup for "addons-097644" (driver="kvm2")
	I0127 14:06:07.555570 1013451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:06:07.555596 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.555863 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:06:07.555889 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.557878 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.558160 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.558198 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.558312 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.558488 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.558664 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.558817 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.640270 1013451 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:06:07.644537 1013451 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:06:07.644585 1013451 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 14:06:07.644664 1013451 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 14:06:07.644692 1013451 start.go:296] duration metric: took 89.13009ms for postStartSetup
	I0127 14:06:07.644732 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:06:07.645370 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.648039 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.648405 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.648434 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.648695 1013451 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json ...
	I0127 14:06:07.648902 1013451 start.go:128] duration metric: took 23.797703895s to createHost
	I0127 14:06:07.648927 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.651100 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.651434 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.651481 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.651607 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.651822 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.651975 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.652136 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.652325 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:07.652538 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:07.652554 1013451 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:06:07.750310 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986767.722256723
	
	I0127 14:06:07.750337 1013451 fix.go:216] guest clock: 1737986767.722256723
	I0127 14:06:07.750344 1013451 fix.go:229] Guest: 2025-01-27 14:06:07.722256723 +0000 UTC Remote: 2025-01-27 14:06:07.648915936 +0000 UTC m=+23.906997834 (delta=73.340787ms)
	I0127 14:06:07.750387 1013451 fix.go:200] guest clock delta is within tolerance: 73.340787ms
	I0127 14:06:07.750393 1013451 start.go:83] releasing machines lock for "addons-097644", held for 23.899285781s
	I0127 14:06:07.750420 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.750687 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.753394 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.753884 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.753910 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.754016 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754573 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754725 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754834 1013451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:06:07.754900 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.754942 1013451 ssh_runner.go:195] Run: cat /version.json
	I0127 14:06:07.754971 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.757717 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.757761 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758110 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.758137 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758171 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.758187 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758397 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.758407 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.758616 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.758632 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.758733 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.758790 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.758889 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.758968 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.862665 1013451 ssh_runner.go:195] Run: systemctl --version
	I0127 14:06:07.869339 1013451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:06:08.030804 1013451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:06:08.038146 1013451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:06:08.038222 1013451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:06:08.055525 1013451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:06:08.055564 1013451 start.go:495] detecting cgroup driver to use...
	I0127 14:06:08.055650 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:06:08.072349 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:06:08.087838 1013451 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:06:08.087904 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:06:08.103124 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:06:08.119044 1013451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:06:08.243455 1013451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:06:08.410960 1013451 docker.go:233] disabling docker service ...
	I0127 14:06:08.411040 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:06:08.425578 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:06:08.438593 1013451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:06:08.564242 1013451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:06:08.678221 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:06:08.692806 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:06:08.713320 1013451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:06:08.713400 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.724369 1013451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:06:08.724451 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.735585 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.746053 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.756606 1013451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:06:08.767332 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.777994 1013451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.795855 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.806376 1013451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:06:08.815691 1013451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:06:08.815764 1013451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:06:08.828215 1013451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:06:08.837677 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:08.971639 1013451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:06:09.063916 1013451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:06:09.064038 1013451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:06:09.069097 1013451 start.go:563] Will wait 60s for crictl version
	I0127 14:06:09.069188 1013451 ssh_runner.go:195] Run: which crictl
	I0127 14:06:09.073113 1013451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:06:09.113259 1013451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:06:09.113366 1013451 ssh_runner.go:195] Run: crio --version
	I0127 14:06:09.142504 1013451 ssh_runner.go:195] Run: crio --version
	I0127 14:06:09.173583 1013451 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:06:09.174862 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:09.177395 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:09.177812 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:09.177839 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:09.178071 1013451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 14:06:09.182188 1013451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:06:09.194695 1013451 kubeadm.go:883] updating cluster {Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:06:09.194860 1013451 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:06:09.194924 1013451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:06:09.227895 1013451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:06:09.227979 1013451 ssh_runner.go:195] Run: which lz4
	I0127 14:06:09.232384 1013451 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:06:09.236534 1013451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:06:09.236573 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:06:10.668374 1013451 crio.go:462] duration metric: took 1.436016004s to copy over tarball
	I0127 14:06:10.668456 1013451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:06:12.991225 1013451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.322734481s)
	I0127 14:06:12.991265 1013451 crio.go:469] duration metric: took 2.322855117s to extract the tarball
	I0127 14:06:12.991298 1013451 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 14:06:13.029341 1013451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:06:13.076231 1013451 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:06:13.076261 1013451 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:06:13.076271 1013451 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.32.1 crio true true} ...
	I0127 14:06:13.076414 1013451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-097644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:06:13.076504 1013451 ssh_runner.go:195] Run: crio config
	I0127 14:06:13.126305 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:06:13.126332 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:06:13.126348 1013451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:06:13.126373 1013451 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-097644 NodeName:addons-097644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:06:13.126544 1013451 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-097644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:06:13.126625 1013451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:06:13.136556 1013451 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:06:13.136615 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:06:13.146362 1013451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 14:06:13.163788 1013451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:06:13.180741 1013451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 14:06:13.198243 1013451 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I0127 14:06:13.202384 1013451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:06:13.214765 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:13.343136 1013451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:06:13.360886 1013451 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644 for IP: 192.168.39.228
	I0127 14:06:13.360930 1013451 certs.go:194] generating shared ca certs ...
	I0127 14:06:13.360952 1013451 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.361149 1013451 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 14:06:13.420822 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt ...
	I0127 14:06:13.420879 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt: {Name:mkc9e8d9cd31bad89b914a0e39146cbc4cb9a566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.421227 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key ...
	I0127 14:06:13.421256 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key: {Name:mk54337b6f7f11134a1a57c50e00b3a25a5764c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.421401 1013451 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 14:06:13.671791 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt ...
	I0127 14:06:13.671827 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt: {Name:mkdf635bff813871fb0a8f71a2bc8202826329c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.672076 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key ...
	I0127 14:06:13.672097 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key: {Name:mkb62b21eecb2941c4e1d8ed131c001defc5b97f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.672212 1013451 certs.go:256] generating profile certs ...
	I0127 14:06:13.672327 1013451 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key
	I0127 14:06:13.672363 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt with IP's: []
	I0127 14:06:13.991379 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt ...
	I0127 14:06:13.991415 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: {Name:mk7115664fd0816a20da8202516a46d36538c4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.991616 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key ...
	I0127 14:06:13.991638 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key: {Name:mkbc457d424e6b80c2d9c2572cbd34113ffac2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.991748 1013451 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b
	I0127 14:06:13.991771 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228]
	I0127 14:06:14.087652 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b ...
	I0127 14:06:14.087693 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b: {Name:mk22529933d8ca851610043569adad4d85cdb151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.087885 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b ...
	I0127 14:06:14.087904 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b: {Name:mk9f9822d6229d3d1127240b0286c22fc9ac2b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.088018 1013451 certs.go:381] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt
	I0127 14:06:14.088115 1013451 certs.go:385] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key
	I0127 14:06:14.088186 1013451 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key
	I0127 14:06:14.088214 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt with IP's: []
	I0127 14:06:14.315571 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt ...
	I0127 14:06:14.315616 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt: {Name:mkf7f0dd114b37a403559f311ca206dc0dfaf354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.315850 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key ...
	I0127 14:06:14.315872 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key: {Name:mk7c251de1f033a991791c5bacc6c6b2e96630a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.316112 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 14:06:14.316168 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:06:14.316208 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:06:14.316249 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 14:06:14.317102 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:06:14.347128 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 14:06:14.372136 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:06:14.397562 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:06:14.422996 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:06:14.448211 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:06:14.474009 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:06:14.501190 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:06:14.526766 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:06:14.552500 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:06:14.570395 1013451 ssh_runner.go:195] Run: openssl version
	I0127 14:06:14.576450 1013451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:06:14.588501 1013451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.593391 1013451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.593460 1013451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.599581 1013451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:06:14.612023 1013451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:06:14.616483 1013451 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:06:14.616554 1013451 kubeadm.go:392] StartCluster: {Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:06:14.616661 1013451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:06:14.616711 1013451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:06:14.653932 1013451 cri.go:89] found id: ""
	I0127 14:06:14.654019 1013451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:06:14.665367 1013451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:06:14.675999 1013451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:06:14.686503 1013451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:06:14.686529 1013451 kubeadm.go:157] found existing configuration files:
	
	I0127 14:06:14.686587 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:06:14.696362 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:06:14.696421 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:06:14.706997 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:06:14.717082 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:06:14.717154 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:06:14.727528 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:06:14.737554 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:06:14.737625 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:06:14.748328 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:06:14.758305 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:06:14.758388 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:06:14.768545 1013451 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:06:14.824105 1013451 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:06:14.824161 1013451 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:06:14.954367 1013451 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:06:14.954546 1013451 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:06:14.954688 1013451 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:06:14.966475 1013451 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:06:15.100500 1013451 out.go:235]   - Generating certificates and keys ...
	I0127 14:06:15.100639 1013451 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:06:15.100710 1013451 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:06:15.100827 1013451 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:06:15.512511 1013451 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:06:15.776387 1013451 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:06:16.241691 1013451 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:06:16.495803 1013451 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:06:16.496119 1013451 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-097644 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0127 14:06:16.692825 1013451 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:06:16.693029 1013451 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-097644 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0127 14:06:16.951084 1013451 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:06:17.150130 1013451 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:06:17.461000 1013451 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:06:17.461403 1013451 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:06:17.774344 1013451 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:06:18.080863 1013451 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:06:18.696649 1013451 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:06:18.826173 1013451 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:06:18.926775 1013451 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:06:18.928106 1013451 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:06:18.932397 1013451 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:06:18.934351 1013451 out.go:235]   - Booting up control plane ...
	I0127 14:06:18.934472 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:06:18.934569 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:06:18.934649 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:06:18.950262 1013451 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:06:18.956527 1013451 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:06:18.956606 1013451 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:06:19.083734 1013451 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:06:19.083865 1013451 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:06:20.084411 1013451 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001422431s
	I0127 14:06:20.084523 1013451 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:06:25.084312 1013451 kubeadm.go:310] [api-check] The API server is healthy after 5.002685853s
	I0127 14:06:25.096890 1013451 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:06:25.113838 1013451 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:06:25.145234 1013451 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:06:25.145454 1013451 kubeadm.go:310] [mark-control-plane] Marking the node addons-097644 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:06:25.158810 1013451 kubeadm.go:310] [bootstrap-token] Using token: eelxhi.iqqoealhyjynagyr
	I0127 14:06:25.160144 1013451 out.go:235]   - Configuring RBAC rules ...
	I0127 14:06:25.160292 1013451 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:06:25.166578 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:06:25.179189 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:06:25.182767 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:06:25.186739 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:06:25.193800 1013451 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:06:25.491524 1013451 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:06:25.946419 1013451 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:06:26.491307 1013451 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:06:26.491353 1013451 kubeadm.go:310] 
	I0127 14:06:26.491436 1013451 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:06:26.491446 1013451 kubeadm.go:310] 
	I0127 14:06:26.491581 1013451 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:06:26.491591 1013451 kubeadm.go:310] 
	I0127 14:06:26.491622 1013451 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:06:26.491706 1013451 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:06:26.491763 1013451 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:06:26.491771 1013451 kubeadm.go:310] 
	I0127 14:06:26.491815 1013451 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:06:26.491823 1013451 kubeadm.go:310] 
	I0127 14:06:26.491902 1013451 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:06:26.491927 1013451 kubeadm.go:310] 
	I0127 14:06:26.491976 1013451 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:06:26.492050 1013451 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:06:26.492110 1013451 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:06:26.492120 1013451 kubeadm.go:310] 
	I0127 14:06:26.492192 1013451 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:06:26.492266 1013451 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:06:26.492279 1013451 kubeadm.go:310] 
	I0127 14:06:26.492347 1013451 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eelxhi.iqqoealhyjynagyr \
	I0127 14:06:26.492435 1013451 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 14:06:26.492455 1013451 kubeadm.go:310] 	--control-plane 
	I0127 14:06:26.492462 1013451 kubeadm.go:310] 
	I0127 14:06:26.492535 1013451 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:06:26.492542 1013451 kubeadm.go:310] 
	I0127 14:06:26.492655 1013451 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eelxhi.iqqoealhyjynagyr \
	I0127 14:06:26.492807 1013451 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 14:06:26.493374 1013451 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:06:26.493713 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:06:26.493730 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:06:26.495461 1013451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:06:26.496737 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:06:26.508895 1013451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 14:06:26.531487 1013451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:06:26.531595 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-097644 minikube.k8s.io/updated_at=2025_01_27T14_06_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=addons-097644 minikube.k8s.io/primary=true
	I0127 14:06:26.531600 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:26.660204 1013451 ops.go:34] apiserver oom_adj: -16
	I0127 14:06:26.660344 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:27.161225 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:27.660827 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:28.161152 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:28.661068 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:29.160473 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:29.661076 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.161022 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.660596 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.801320 1013451 kubeadm.go:1113] duration metric: took 4.269789638s to wait for elevateKubeSystemPrivileges
	I0127 14:06:30.801428 1013451 kubeadm.go:394] duration metric: took 16.184866129s to StartCluster
	I0127 14:06:30.801479 1013451 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:30.801625 1013451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:06:30.802052 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:30.802521 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:06:30.802558 1013451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:06:30.802614 1013451 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0127 14:06:30.802733 1013451 addons.go:69] Setting yakd=true in profile "addons-097644"
	I0127 14:06:30.802749 1013451 addons.go:69] Setting inspektor-gadget=true in profile "addons-097644"
	I0127 14:06:30.802771 1013451 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-097644"
	I0127 14:06:30.802767 1013451 addons.go:69] Setting default-storageclass=true in profile "addons-097644"
	I0127 14:06:30.802782 1013451 addons.go:238] Setting addon inspektor-gadget=true in "addons-097644"
	I0127 14:06:30.802787 1013451 addons.go:69] Setting registry=true in profile "addons-097644"
	I0127 14:06:30.802789 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:30.802795 1013451 addons.go:69] Setting ingress=true in profile "addons-097644"
	I0127 14:06:30.802809 1013451 addons.go:69] Setting volcano=true in profile "addons-097644"
	I0127 14:06:30.802819 1013451 addons.go:238] Setting addon ingress=true in "addons-097644"
	I0127 14:06:30.802820 1013451 addons.go:238] Setting addon volcano=true in "addons-097644"
	I0127 14:06:30.802827 1013451 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-097644"
	I0127 14:06:30.802840 1013451 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-097644"
	I0127 14:06:30.802851 1013451 addons.go:69] Setting cloud-spanner=true in profile "addons-097644"
	I0127 14:06:30.802867 1013451 addons.go:238] Setting addon cloud-spanner=true in "addons-097644"
	I0127 14:06:30.802875 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802797 1013451 addons.go:238] Setting addon registry=true in "addons-097644"
	I0127 14:06:30.802879 1013451 addons.go:69] Setting volumesnapshots=true in profile "addons-097644"
	I0127 14:06:30.802883 1013451 addons.go:69] Setting gcp-auth=true in profile "addons-097644"
	I0127 14:06:30.802895 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802901 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802905 1013451 addons.go:238] Setting addon volumesnapshots=true in "addons-097644"
	I0127 14:06:30.802916 1013451 mustload.go:65] Loading cluster: addons-097644
	I0127 14:06:30.802923 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803032 1013451 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-097644"
	I0127 14:06:30.803073 1013451 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-097644"
	I0127 14:06:30.803102 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803126 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:30.802805 1013451 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-097644"
	I0127 14:06:30.803177 1013451 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-097644"
	I0127 14:06:30.803393 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803444 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803447 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.802869 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803474 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803497 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803523 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803613 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803651 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803721 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803736 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803760 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803765 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802783 1013451 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-097644"
	I0127 14:06:30.803814 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803871 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802818 1013451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-097644"
	I0127 14:06:30.802800 1013451 addons.go:69] Setting storage-provisioner=true in profile "addons-097644"
	I0127 14:06:30.804156 1013451 addons.go:238] Setting addon storage-provisioner=true in "addons-097644"
	I0127 14:06:30.804206 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804439 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.804477 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802876 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804686 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.804708 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803834 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802767 1013451 addons.go:69] Setting metrics-server=true in profile "addons-097644"
	I0127 14:06:30.804942 1013451 addons.go:238] Setting addon metrics-server=true in "addons-097644"
	I0127 14:06:30.804972 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.805340 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.805359 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.805372 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.805400 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.813160 1013451 out.go:177] * Verifying Kubernetes components...
	I0127 14:06:30.802876 1013451 addons.go:69] Setting ingress-dns=true in profile "addons-097644"
	I0127 14:06:30.813527 1013451 addons.go:238] Setting addon ingress-dns=true in "addons-097644"
	I0127 14:06:30.813587 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.814019 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.814072 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802762 1013451 addons.go:238] Setting addon yakd=true in "addons-097644"
	I0127 14:06:30.814349 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.814935 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.814996 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.815147 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:30.802869 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804130 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.815331 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.824258 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0127 14:06:30.825578 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0127 14:06:30.829296 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0127 14:06:30.829387 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0127 14:06:30.829572 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.829610 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.829612 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.829656 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.831082 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831098 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831220 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831225 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831765 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.831788 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.831892 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.831912 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832037 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.832062 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832195 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.832345 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.832357 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832802 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.832840 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.833353 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833374 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833419 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833641 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.834032 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.834058 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.834072 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.834105 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.838453 1013451 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-097644"
	I0127 14:06:30.838522 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.838935 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.838995 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.840603 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0127 14:06:30.843186 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0127 14:06:30.843795 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.844312 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.844326 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.844777 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.844960 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.849282 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.849730 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.849777 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.863460 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0127 14:06:30.864087 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.864757 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.864784 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.865181 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.865783 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.865833 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.873911 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.874553 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0127 14:06:30.874638 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.874658 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.875026 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.875592 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.875633 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.876937 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0127 14:06:30.877116 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0127 14:06:30.877252 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.878004 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.878029 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.878487 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.879164 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.879208 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.879477 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0127 14:06:30.879682 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0127 14:06:30.880336 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.880358 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0127 14:06:30.880765 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.881119 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.881138 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.881232 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0127 14:06:30.881435 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.881449 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.881871 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.881945 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.881977 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0127 14:06:30.882565 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.882610 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.882853 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.883356 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.883373 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.883436 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.883527 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.883562 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.883847 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.883908 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.884462 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.884501 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.884735 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.884897 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.884907 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885047 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.885329 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.885475 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.885487 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885686 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.885815 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.885828 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885886 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.886415 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.886456 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.886895 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.886966 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.886997 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.887517 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.887560 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.887602 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.887813 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.890600 1013451 addons.go:238] Setting addon default-storageclass=true in "addons-097644"
	I0127 14:06:30.890648 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.890997 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.891046 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.891842 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.894240 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 14:06:30.894842 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0127 14:06:30.895286 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.895416 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0127 14:06:30.895847 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.895866 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.896029 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.896491 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.896510 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.896934 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.897068 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:30.897222 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.898593 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0127 14:06:30.899242 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I0127 14:06:30.899629 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:30.899790 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.899976 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.900109 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.900506 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.900557 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.900630 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.900646 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.900769 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.900778 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.901107 1013451 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 14:06:30.901132 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 14:06:30.901138 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.901155 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.901326 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.903634 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.904294 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.906030 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.906143 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.906825 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.906847 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.907181 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.907365 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.907455 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.907556 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.907888 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.908168 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.910334 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 14:06:30.910342 1013451 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 14:06:30.912373 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 14:06:30.912395 1013451 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 14:06:30.912423 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.912492 1013451 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 14:06:30.912507 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 14:06:30.912528 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.916227 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.916724 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.916749 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.916943 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.917159 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.917417 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.917631 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.917987 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.918511 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.918550 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.918760 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.918938 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.919079 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.919222 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.923687 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I0127 14:06:30.924139 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.924940 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.924966 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.925060 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I0127 14:06:30.925654 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.926360 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.926379 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.926947 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.927207 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.928312 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.929602 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.930009 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:30.930023 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:30.932400 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:30.932438 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:30.932446 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:30.932454 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:30.932461 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:30.932905 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:30.932938 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:30.932946 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 14:06:30.933068 1013451 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0127 14:06:30.933415 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.935674 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0127 14:06:30.935720 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0127 14:06:30.935830 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I0127 14:06:30.936334 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.936432 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.936950 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.936971 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.937146 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.937165 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.937592 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.937657 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0127 14:06:30.937811 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.938038 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.938478 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.938564 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.938581 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.938719 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.938993 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.939067 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43135
	I0127 14:06:30.939447 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.940030 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.940054 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.940132 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.940643 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.940690 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.941538 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.941561 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.941618 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.941662 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.942168 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.942229 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.942674 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0127 14:06:30.942829 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.942877 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.943179 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.943303 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.943656 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.943677 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.944080 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 14:06:30.944110 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.944168 1013451 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 14:06:30.944396 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.944907 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40807
	I0127 14:06:30.945729 1013451 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 14:06:30.945746 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 14:06:30.945767 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.947021 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 14:06:30.947720 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I0127 14:06:30.947740 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.947803 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0127 14:06:30.948506 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.948668 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.948768 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.949312 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.949184 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949424 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949777 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949798 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949814 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949830 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949831 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:06:30.950788 1013451 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 14:06:30.949879 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 14:06:30.950166 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.950190 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.951652 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.951908 1013451 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:06:30.951930 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:06:30.951955 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.952269 1013451 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 14:06:30.952290 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 14:06:30.952314 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.952564 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.952635 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0127 14:06:30.952847 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.953218 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.953829 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.953849 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.953949 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.954442 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.954245 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 14:06:30.954648 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.957753 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 14:06:30.957955 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958028 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.958064 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958865 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.958661 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958740 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.959195 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.959217 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959357 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.959389 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959494 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.959717 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959903 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.960115 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.960228 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.960239 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.960472 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.960534 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.960555 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.960505 1013451 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 14:06:30.960521 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 14:06:30.960696 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.960722 1013451 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 14:06:30.960806 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.961484 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 14:06:30.962231 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.962333 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.962472 1013451 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 14:06:30.962490 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:06:30.962854 1013451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:06:30.962875 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.962916 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I0127 14:06:30.962788 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.963147 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.963248 1013451 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 14:06:30.963288 1013451 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 14:06:30.963312 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.963411 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.963647 1013451 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 14:06:30.963669 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 14:06:30.963686 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.964105 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 14:06:30.964126 1013451 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 14:06:30.964145 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.964611 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.964641 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.965199 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.965450 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.965974 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 14:06:30.967214 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 14:06:30.967970 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.968624 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 14:06:30.968647 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 14:06:30.968669 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.968879 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969411 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969574 1013451 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 14:06:30.969589 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969904 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42225
	I0127 14:06:30.969929 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.969945 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.970191 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.970321 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.970337 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.970367 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.970441 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.970532 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.970725 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.971134 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.971167 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.971138 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.971183 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971292 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.971326 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.971354 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.971404 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.971423 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971578 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.971627 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.971673 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.971859 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.971884 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.971921 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.971936 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971961 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.972328 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.972505 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 14:06:30.972529 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.972896 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.973056 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.973650 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.973898 1013451 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:06:30.973918 1013451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:06:30.973937 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.974033 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.974299 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 14:06:30.974313 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 14:06:30.974330 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.974535 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.974560 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.974828 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.975014 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.975139 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.975250 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	W0127 14:06:30.976492 1013451 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45740->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.976527 1013451 retry.go:31] will retry after 249.98777ms: ssh: handshake failed: read tcp 192.168.39.1:45740->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.977856 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.977979 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978359 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.978399 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978592 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.978603 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.978618 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978798 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.978858 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.978981 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.979003 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.979124 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.979153 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.979292 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	W0127 14:06:30.980391 1013451 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45758->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.980418 1013451 retry.go:31] will retry after 282.19412ms: ssh: handshake failed: read tcp 192.168.39.1:45758->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.986758 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0127 14:06:30.987211 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.987797 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.987824 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.988141 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.988375 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.990245 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.992302 1013451 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 14:06:30.993765 1013451 out.go:177]   - Using image docker.io/busybox:stable
	I0127 14:06:30.995107 1013451 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 14:06:30.995123 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 14:06:30.995143 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.998641 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.999124 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.999163 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.999454 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.999690 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.999838 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:31.000028 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:31.232253 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 14:06:31.331831 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 14:06:31.347794 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 14:06:31.426357 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 14:06:31.491578 1013451 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 14:06:31.491606 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 14:06:31.512213 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 14:06:31.512250 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 14:06:31.515355 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:06:31.515377 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 14:06:31.516574 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 14:06:31.525098 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 14:06:31.533157 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:06:31.559468 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 14:06:31.559521 1013451 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 14:06:31.575968 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:06:31.648773 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 14:06:31.648804 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 14:06:31.655677 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 14:06:31.655706 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 14:06:31.683163 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:06:31.683200 1013451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:06:31.694871 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 14:06:31.704356 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 14:06:31.704382 1013451 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 14:06:31.744904 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 14:06:31.744940 1013451 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 14:06:31.903974 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 14:06:31.904017 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 14:06:31.964569 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 14:06:31.964605 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 14:06:31.969199 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:06:31.969220 1013451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:06:32.044200 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 14:06:32.044228 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 14:06:32.127179 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 14:06:32.127220 1013451 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 14:06:32.135626 1013451 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.333055604s)
	I0127 14:06:32.135659 1013451 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.320384321s)
	I0127 14:06:32.135752 1013451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:06:32.135838 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 14:06:32.149940 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 14:06:32.149986 1013451 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 14:06:32.315159 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 14:06:32.343031 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 14:06:32.343069 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 14:06:32.360427 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:06:32.363253 1013451 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:32.363282 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 14:06:32.374156 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 14:06:32.374180 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 14:06:32.467818 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 14:06:32.467851 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 14:06:32.668364 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:32.710295 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 14:06:32.747185 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 14:06:32.747216 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 14:06:33.065468 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 14:06:33.065504 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 14:06:33.337642 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 14:06:33.337736 1013451 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 14:06:33.876528 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 14:06:33.876560 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 14:06:34.139997 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.907702721s)
	I0127 14:06:34.140087 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:34.140107 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:34.140458 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:34.140487 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:34.140506 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:34.140527 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:34.140800 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:34.140818 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:34.200127 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 14:06:34.200161 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 14:06:34.562411 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 14:06:34.562443 1013451 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 14:06:34.714298 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 14:06:36.621630 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.289747033s)
	I0127 14:06:36.621713 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.621733 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.621631 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.273802077s)
	I0127 14:06:36.621792 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.621810 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622093 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622103 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622131 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622142 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.622152 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622153 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622192 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622208 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622223 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.622252 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622394 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622422 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622480 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622510 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622521 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.760227 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.760259 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.760715 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.760775 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.760796 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:37.753882 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 14:06:37.753936 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:37.757253 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:37.757684 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:37.757716 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:37.757878 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:37.758108 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:37.758286 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:37.758457 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:38.134471 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 14:06:38.320566 1013451 addons.go:238] Setting addon gcp-auth=true in "addons-097644"
	I0127 14:06:38.320644 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:38.321069 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:38.321130 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:38.336729 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 14:06:38.337259 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:38.337802 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:38.337830 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:38.338264 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:38.338744 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:38.338792 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:38.354738 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0127 14:06:38.355352 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:38.355944 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:38.355968 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:38.356332 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:38.356545 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:38.358363 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:38.358617 1013451 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 14:06:38.358647 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:38.361268 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:38.361655 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:38.361682 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:38.361861 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:38.362040 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:38.362196 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:38.362330 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:39.535502 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.109096844s)
	I0127 14:06:39.535546 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.010420009s)
	I0127 14:06:39.535517 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.018902491s)
	I0127 14:06:39.535592 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535581 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535619 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535628 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535636 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.002450449s)
	I0127 14:06:39.535631 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535671 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535683 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.95968766s)
	I0127 14:06:39.535709 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535724 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535686 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535756 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.840858115s)
	I0127 14:06:39.535612 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535782 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535791 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535840 1013451 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.400060603s)
	I0127 14:06:39.535876 1013451 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.40001441s)
	I0127 14:06:39.535893 1013451 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0127 14:06:39.535966 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.220769901s)
	I0127 14:06:39.536002 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536013 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536138 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.175676841s)
	I0127 14:06:39.536161 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536171 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536302 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.867903313s)
	W0127 14:06:39.536330 1013451 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 14:06:39.536367 1013451 retry.go:31] will retry after 296.657665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 14:06:39.536420 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.826074832s)
	I0127 14:06:39.536451 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536464 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536976 1013451 node_ready.go:35] waiting up to 6m0s for node "addons-097644" to be "Ready" ...
	I0127 14:06:39.538246 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538268 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538278 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538286 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538255 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538334 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538358 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538372 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538384 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538395 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538416 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538437 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538457 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538472 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538495 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538521 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538546 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538560 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538568 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538581 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538594 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538529 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538632 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538641 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538644 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538649 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538655 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538658 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538544 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538662 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538437 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538666 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538707 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538732 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538738 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538747 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538754 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538954 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538987 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538994 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538457 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539033 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539043 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.539051 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538631 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539103 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.539111 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.539291 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.539323 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539331 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539465 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.539494 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539501 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540397 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540437 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540445 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540507 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540538 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540545 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540555 1013451 addons.go:479] Verifying addon metrics-server=true in "addons-097644"
	I0127 14:06:39.540638 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540659 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540664 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540670 1013451 addons.go:479] Verifying addon ingress=true in "addons-097644"
	I0127 14:06:39.540826 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540849 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540856 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540865 1013451 addons.go:479] Verifying addon registry=true in "addons-097644"
	I0127 14:06:39.541201 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.541235 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.541251 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.541333 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.541374 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.541381 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.543517 1013451 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-097644 service yakd-dashboard -n yakd-dashboard
	
	I0127 14:06:39.543527 1013451 out.go:177] * Verifying ingress addon...
	I0127 14:06:39.543529 1013451 out.go:177] * Verifying registry addon...
	I0127 14:06:39.545868 1013451 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 14:06:39.546062 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 14:06:39.551413 1013451 node_ready.go:49] node "addons-097644" has status "Ready":"True"
	I0127 14:06:39.551444 1013451 node_ready.go:38] duration metric: took 14.446121ms for node "addons-097644" to be "Ready" ...
	I0127 14:06:39.551456 1013451 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:06:39.591856 1013451 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 14:06:39.591887 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:39.591997 1013451 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 14:06:39.592022 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:39.604544 1013451 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:39.620217 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.620245 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.620663 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.620712 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.620733 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.833775 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:40.042238 1013451 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-097644" context rescaled to 1 replicas
	I0127 14:06:40.056864 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:40.057325 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:40.574204 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:40.574352 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:40.691503 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.977142989s)
	I0127 14:06:40.691571 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:40.691567 1013451 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.332922668s)
	I0127 14:06:40.691586 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:40.692022 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:40.692044 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:40.692055 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:40.692080 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:40.692356 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:40.692379 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:40.692393 1013451 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-097644"
	I0127 14:06:40.693820 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:40.693819 1013451 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 14:06:40.695829 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 14:06:40.696785 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 14:06:40.697165 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 14:06:40.697193 1013451 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 14:06:40.719430 1013451 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 14:06:40.719457 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:40.802113 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 14:06:40.802145 1013451 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 14:06:40.994953 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 14:06:40.995010 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 14:06:41.051371 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:41.055369 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:41.085073 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 14:06:41.212968 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:41.550636 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:41.551229 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:41.619011 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:41.704620 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:42.054408 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:42.054655 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:42.202621 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:42.508558 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.674717249s)
	I0127 14:06:42.508636 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:42.508654 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:42.508962 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:42.508984 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:42.508994 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:42.509010 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:42.509270 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:42.509297 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:42.509297 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:42.550865 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:42.552139 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:42.700968 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:43.051426 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:43.051775 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:43.219737 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:43.654172 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:43.659020 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.573886282s)
	I0127 14:06:43.659089 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:43.659111 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:43.659423 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:43.659520 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:43.659535 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:43.659544 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:43.659496 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:43.659831 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:43.659850 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:43.661096 1013451 addons.go:479] Verifying addon gcp-auth=true in "addons-097644"
	I0127 14:06:43.662980 1013451 out.go:177] * Verifying gcp-auth addon...
	I0127 14:06:43.665443 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 14:06:43.667959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:43.686297 1013451 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 14:06:43.686332 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:43.698333 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:43.752116 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:44.051507 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:44.051642 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:44.169983 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:44.202197 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:44.550596 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:44.551695 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:44.669572 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:44.701465 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:45.051101 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:45.051498 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:45.168566 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:45.201519 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:45.551156 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:45.552669 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:45.675646 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:45.702063 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:46.052220 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:46.052234 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:46.112080 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:46.168904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:46.201719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:46.551973 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:46.552112 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:46.668877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:46.701725 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:47.050599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:47.050979 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:47.169889 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:47.203312 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:47.550817 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:47.551169 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:47.668803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:47.701344 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:48.053223 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:48.053534 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:48.120721 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:48.172399 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:48.201255 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:48.552152 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:48.562421 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:48.670118 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:48.706743 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:49.056813 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:49.057202 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:49.175007 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:49.207070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:49.552745 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:49.552809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:49.670875 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:49.702320 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.051877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:50.052248 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:50.168779 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:50.202479 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.551892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:50.552457 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:50.615652 1013451 pod_ready.go:93] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.615678 1013451 pod_ready.go:82] duration metric: took 11.011100516s for pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.615689 1013451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.627270 1013451 pod_ready.go:93] pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.627306 1013451 pod_ready.go:82] duration metric: took 11.610993ms for pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.627316 1013451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.632345 1013451 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xk7kv" not found
	I0127 14:06:50.632372 1013451 pod_ready.go:82] duration metric: took 5.049964ms for pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace to be "Ready" ...
	E0127 14:06:50.632383 1013451 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xk7kv" not found
	I0127 14:06:50.632390 1013451 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.637099 1013451 pod_ready.go:93] pod "etcd-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.637119 1013451 pod_ready.go:82] duration metric: took 4.724126ms for pod "etcd-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.637128 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.641577 1013451 pod_ready.go:93] pod "kube-apiserver-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.641597 1013451 pod_ready.go:82] duration metric: took 4.462666ms for pod "kube-apiserver-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.641605 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.669462 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:50.706029 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.809340 1013451 pod_ready.go:93] pod "kube-controller-manager-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.809365 1013451 pod_ready.go:82] duration metric: took 167.752957ms for pod "kube-controller-manager-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.809377 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4zwd" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.050450 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:51.051944 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:51.170085 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:51.202947 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:51.208582 1013451 pod_ready.go:93] pod "kube-proxy-f4zwd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:51.208606 1013451 pod_ready.go:82] duration metric: took 399.222781ms for pod "kube-proxy-f4zwd" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.208616 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.551263 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:51.551705 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:51.608807 1013451 pod_ready.go:93] pod "kube-scheduler-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:51.608840 1013451 pod_ready.go:82] duration metric: took 400.21695ms for pod "kube-scheduler-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.608854 1013451 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.670471 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:51.701367 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:52.050707 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:52.050834 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:52.169284 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:52.200658 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:52.550340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:52.551185 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:52.668895 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:52.702017 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:53.057413 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:53.057641 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:53.169648 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:53.202006 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:53.550241 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:53.550722 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:53.620587 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:53.669530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:53.701719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:54.052792 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:54.053279 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:54.169476 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:54.201306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:54.551907 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:54.552638 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:54.669077 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:54.701764 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:55.100240 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:55.100296 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:55.182070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:55.201395 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:55.551761 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:55.551927 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:55.668933 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:55.701923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:56.050536 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:56.050982 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:56.119811 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:56.168904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:56.202072 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:56.551874 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:56.552481 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:56.669587 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:56.701617 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:57.050231 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:57.050613 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:57.170169 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:57.201972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:57.551609 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:57.551795 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:57.670084 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:57.702058 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:58.383183 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:58.383399 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:58.384179 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:58.384242 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:58.387592 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:58.550466 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:58.550887 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:58.668764 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:58.701776 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:59.050306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:59.050697 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:59.169436 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:59.204311 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:59.560946 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:59.560967 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:59.670919 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:59.702414 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:00.468343 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:00.468634 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:00.469971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:00.470230 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:00.475121 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:00.551178 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:00.552210 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:00.670053 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:00.702754 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:01.051143 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:01.051753 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:01.169521 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:01.202017 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:01.550952 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:01.551011 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:01.669355 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:01.701492 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:02.054133 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:02.054531 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:02.169554 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:02.201828 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:02.553190 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:02.553417 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:02.616135 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:02.669251 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:02.702653 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:03.051556 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:03.052058 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:03.168688 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:03.206615 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:03.552205 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:03.552324 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:03.670459 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:03.705277 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:04.050893 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:04.051564 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:04.169123 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:04.271611 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:04.550873 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:04.551002 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:04.618165 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:04.669774 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:04.701982 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:05.050574 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:05.050984 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:05.168730 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:05.201868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:05.550374 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:05.550418 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:05.668407 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:05.701325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:06.050944 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:06.051773 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:06.169027 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:06.201826 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:06.550446 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:06.551065 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.011171 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.012800 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:07.014528 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:07.051263 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.052394 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:07.168896 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.202772 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:07.552036 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.552265 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:07.669494 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.701789 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:08.050016 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:08.050930 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:08.169153 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:08.201129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:08.552701 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:08.554461 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:08.669806 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:08.702780 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:09.051527 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:09.051791 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:09.115325 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:09.169334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:09.201659 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:09.550572 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:09.550938 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:09.668878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:09.701776 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:10.051782 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:10.052645 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:10.168877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:10.201786 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:10.551300 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:10.551673 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:10.669403 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:10.700959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:11.051149 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:11.051672 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:11.115643 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:11.169733 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:11.202417 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:11.552212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:11.552243 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:11.671629 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:11.701802 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.051799 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:12.054435 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:12.170154 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:12.203930 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.557266 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:12.557520 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:12.625739 1013451 pod_ready.go:93] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"True"
	I0127 14:07:12.625769 1013451 pod_ready.go:82] duration metric: took 21.016907428s for pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.625780 1013451 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.635943 1013451 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:07:12.635969 1013451 pod_ready.go:82] duration metric: took 10.183333ms for pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.635988 1013451 pod_ready.go:39] duration metric: took 33.08451816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:07:12.636039 1013451 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:07:12.636109 1013451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:07:12.671346 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:12.688681 1013451 api_server.go:72] duration metric: took 41.886073676s to wait for apiserver process to appear ...
	I0127 14:07:12.688712 1013451 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:07:12.688736 1013451 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 14:07:12.701264 1013451 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 14:07:12.702757 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.703236 1013451 api_server.go:141] control plane version: v1.32.1
	I0127 14:07:12.703267 1013451 api_server.go:131] duration metric: took 14.546167ms to wait for apiserver health ...
	I0127 14:07:12.703280 1013451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:07:12.717932 1013451 system_pods.go:59] 18 kube-system pods found
	I0127 14:07:12.717976 1013451 system_pods.go:61] "amd-gpu-device-plugin-89xv2" [7b98e34d-687f-47aa-8a1f-b8c5c016e93e] Running
	I0127 14:07:12.717984 1013451 system_pods.go:61] "coredns-668d6bf9bc-f5h88" [f45297c4-5f83-45a6-9f30-d0b16d29ef1d] Running
	I0127 14:07:12.717995 1013451 system_pods.go:61] "csi-hostpath-attacher-0" [0e65ff6e-fdeb-4e47-a281-58d2846521dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 14:07:12.718012 1013451 system_pods.go:61] "csi-hostpath-resizer-0" [f4b69299-7108-4d71-a19f-c8640d4d9d7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 14:07:12.718024 1013451 system_pods.go:61] "csi-hostpathplugin-8jql5" [cdb87938-f761-462d-aaf8-e4a74f0d8e7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 14:07:12.718035 1013451 system_pods.go:61] "etcd-addons-097644" [15355068-d7bd-4c15-8402-670f796142e0] Running
	I0127 14:07:12.718043 1013451 system_pods.go:61] "kube-apiserver-addons-097644" [3bf8c5a4-9f46-4a38-8c40-03e649c1865a] Running
	I0127 14:07:12.718050 1013451 system_pods.go:61] "kube-controller-manager-addons-097644" [b91db1d0-e6e1-40f4-a230-9496ded8dfbc] Running
	I0127 14:07:12.718057 1013451 system_pods.go:61] "kube-ingress-dns-minikube" [f4e9fbe7-9f01-42c9-abd2-70a375dbf64b] Running
	I0127 14:07:12.718063 1013451 system_pods.go:61] "kube-proxy-f4zwd" [35fadf52-7154-403a-9e7c-d6efebab978e] Running
	I0127 14:07:12.718070 1013451 system_pods.go:61] "kube-scheduler-addons-097644" [64c5112b-77bd-466f-a1ed-e8f2c6512297] Running
	I0127 14:07:12.718076 1013451 system_pods.go:61] "metrics-server-7fbb699795-dr2kc" [d5f1b090-54ae-4efb-ade0-56f8442d821c] Running
	I0127 14:07:12.718082 1013451 system_pods.go:61] "nvidia-device-plugin-daemonset-bs6d4" [157addb8-6c2f-41d6-9d57-8ff984241b50] Running
	I0127 14:07:12.718088 1013451 system_pods.go:61] "registry-6c88467877-gs69t" [56ae8219-917b-43a3-8b3a-9965b018d7ae] Running
	I0127 14:07:12.718096 1013451 system_pods.go:61] "registry-proxy-68qft" [fcd36f1c-2ee6-49df-985c-78afd0b91e4b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 14:07:12.718107 1013451 system_pods.go:61] "snapshot-controller-68b874b76f-bncpk" [b196166f-4021-4337-a63b-54cb610bac71] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.718120 1013451 system_pods.go:61] "snapshot-controller-68b874b76f-pqf9k" [1173dcb4-3cf3-44b8-ae6f-7c755536337d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.718127 1013451 system_pods.go:61] "storage-provisioner" [d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf] Running
	I0127 14:07:12.718139 1013451 system_pods.go:74] duration metric: took 14.846764ms to wait for pod list to return data ...
	I0127 14:07:12.718153 1013451 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:07:12.721126 1013451 default_sa.go:45] found service account: "default"
	I0127 14:07:12.721157 1013451 default_sa.go:55] duration metric: took 2.993622ms for default service account to be created ...
	I0127 14:07:12.721171 1013451 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:07:12.728179 1013451 system_pods.go:87] 18 kube-system pods found
	I0127 14:07:12.730708 1013451 system_pods.go:105] "amd-gpu-device-plugin-89xv2" [7b98e34d-687f-47aa-8a1f-b8c5c016e93e] Running
	I0127 14:07:12.730727 1013451 system_pods.go:105] "coredns-668d6bf9bc-f5h88" [f45297c4-5f83-45a6-9f30-d0b16d29ef1d] Running
	I0127 14:07:12.730738 1013451 system_pods.go:105] "csi-hostpath-attacher-0" [0e65ff6e-fdeb-4e47-a281-58d2846521dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 14:07:12.730748 1013451 system_pods.go:105] "csi-hostpath-resizer-0" [f4b69299-7108-4d71-a19f-c8640d4d9d7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 14:07:12.730761 1013451 system_pods.go:105] "csi-hostpathplugin-8jql5" [cdb87938-f761-462d-aaf8-e4a74f0d8e7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 14:07:12.730773 1013451 system_pods.go:105] "etcd-addons-097644" [15355068-d7bd-4c15-8402-670f796142e0] Running
	I0127 14:07:12.730781 1013451 system_pods.go:105] "kube-apiserver-addons-097644" [3bf8c5a4-9f46-4a38-8c40-03e649c1865a] Running
	I0127 14:07:12.730787 1013451 system_pods.go:105] "kube-controller-manager-addons-097644" [b91db1d0-e6e1-40f4-a230-9496ded8dfbc] Running
	I0127 14:07:12.730794 1013451 system_pods.go:105] "kube-ingress-dns-minikube" [f4e9fbe7-9f01-42c9-abd2-70a375dbf64b] Running
	I0127 14:07:12.730798 1013451 system_pods.go:105] "kube-proxy-f4zwd" [35fadf52-7154-403a-9e7c-d6efebab978e] Running
	I0127 14:07:12.730802 1013451 system_pods.go:105] "kube-scheduler-addons-097644" [64c5112b-77bd-466f-a1ed-e8f2c6512297] Running
	I0127 14:07:12.730806 1013451 system_pods.go:105] "metrics-server-7fbb699795-dr2kc" [d5f1b090-54ae-4efb-ade0-56f8442d821c] Running
	I0127 14:07:12.730811 1013451 system_pods.go:105] "nvidia-device-plugin-daemonset-bs6d4" [157addb8-6c2f-41d6-9d57-8ff984241b50] Running
	I0127 14:07:12.730815 1013451 system_pods.go:105] "registry-6c88467877-gs69t" [56ae8219-917b-43a3-8b3a-9965b018d7ae] Running
	I0127 14:07:12.730821 1013451 system_pods.go:105] "registry-proxy-68qft" [fcd36f1c-2ee6-49df-985c-78afd0b91e4b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 14:07:12.730828 1013451 system_pods.go:105] "snapshot-controller-68b874b76f-bncpk" [b196166f-4021-4337-a63b-54cb610bac71] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.730836 1013451 system_pods.go:105] "snapshot-controller-68b874b76f-pqf9k" [1173dcb4-3cf3-44b8-ae6f-7c755536337d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.730843 1013451 system_pods.go:105] "storage-provisioner" [d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf] Running
	I0127 14:07:12.730852 1013451 system_pods.go:147] duration metric: took 9.674182ms to wait for k8s-apps to be running ...
	I0127 14:07:12.730866 1013451 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:07:12.730919 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:07:12.776597 1013451 system_svc.go:56] duration metric: took 45.717863ms WaitForService to wait for kubelet
	I0127 14:07:12.776634 1013451 kubeadm.go:582] duration metric: took 41.974036194s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:07:12.776668 1013451 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:07:12.779895 1013451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:07:12.779925 1013451 node_conditions.go:123] node cpu capacity is 2
	I0127 14:07:12.779937 1013451 node_conditions.go:105] duration metric: took 3.263578ms to run NodePressure ...
	I0127 14:07:12.779949 1013451 start.go:241] waiting for startup goroutines ...
	I0127 14:07:13.051978 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:13.052021 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:13.185783 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:13.206287 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:13.550709 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:13.551235 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:13.669317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:13.701284 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:14.050846 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:14.051195 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:14.168756 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:14.202094 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:14.550255 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:14.551602 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:14.669317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:14.701627 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:15.053046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:15.053769 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:15.170995 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:15.203340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:15.550746 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:15.551289 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:15.669797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:15.702168 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:16.050144 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:16.050517 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:16.169356 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:16.201683 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:16.550953 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:16.551195 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:16.669784 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:16.702119 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:17.051144 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:17.051141 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:17.468098 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:17.469892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:17.551344 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:17.551464 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:17.669038 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:17.702218 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:18.051797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:18.052165 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:18.169400 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:18.202195 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:18.551843 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:18.552250 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:18.668610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:18.701555 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:19.050623 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:19.051183 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:19.170878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:19.201626 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:19.563323 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:19.565912 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:19.668974 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:19.702334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:20.051931 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:20.052068 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:20.169838 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:20.201669 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:20.551529 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:20.551698 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:20.669152 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:20.701960 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:21.051433 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:21.051582 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:21.169879 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:21.201792 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:21.551317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:21.551547 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:21.669135 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:21.701862 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:22.050599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:22.050786 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:22.169800 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:22.201820 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:22.549984 1013451 kapi.go:107] duration metric: took 43.003916156s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 14:07:22.550678 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:22.670404 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:22.701421 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:23.051144 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:23.169833 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:23.201769 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:23.550570 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:23.669457 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:23.701823 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:24.050614 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:24.169635 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:24.201972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:24.549864 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:24.850060 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:24.850512 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:25.051285 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:25.168488 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:25.202049 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:25.550619 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:25.669472 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:25.701812 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:26.050499 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:26.169201 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:26.201034 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:26.550623 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:26.669459 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:26.702346 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:27.051287 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:27.169129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:27.201158 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:27.551107 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:27.670129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:27.702139 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:28.050633 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:28.169514 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:28.201745 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:28.549622 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:28.669711 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:28.701840 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:29.049926 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:29.169680 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:29.202737 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:29.550738 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:29.669967 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:29.701832 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:30.051104 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:30.169470 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:30.202270 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:30.550200 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:30.669788 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:30.701729 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:31.050315 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:31.169180 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:31.202245 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:31.550908 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:31.669616 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:31.701623 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:32.049918 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:32.169923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:32.202237 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:32.550701 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:32.669164 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:32.701141 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:33.050480 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:33.168992 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:33.202153 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:33.550701 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:33.669874 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:33.702366 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:34.050511 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:34.169277 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:34.201418 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:34.550643 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:34.669531 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:34.701256 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:35.054928 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:35.169647 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:35.201868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:35.549900 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:35.669754 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:35.701752 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:36.050017 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:36.169892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:36.204020 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:36.551071 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:36.669899 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:36.701717 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:37.050081 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:37.169825 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:37.202223 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:37.550847 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:37.669530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:37.701678 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:38.050063 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:38.169923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:38.202463 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:38.549773 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:38.669659 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:38.701996 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:39.050495 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:39.169641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:39.201887 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:39.550593 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:39.670566 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:39.702072 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:40.050380 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:40.169307 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:40.201420 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:40.550999 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:40.669715 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:40.701440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:41.050230 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:41.168879 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:41.202325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:41.550624 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:41.669747 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:41.701809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:42.050493 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:42.169211 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:42.201520 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:42.550682 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:42.669305 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:42.701468 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:43.050555 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:43.169709 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:43.201742 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:43.550616 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:43.669985 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:43.702199 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:44.050462 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:44.168863 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:44.201969 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:44.550657 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:44.669862 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:44.702322 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:45.051337 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:45.169209 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:45.202025 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:45.550160 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:45.668972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:45.701927 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:46.050307 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:46.168971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:46.202059 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:46.551128 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:46.668578 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:46.702834 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:47.050852 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:47.169959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:47.202008 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:47.551425 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:47.669309 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:47.701110 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:48.051016 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:48.169525 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:48.201587 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:48.550480 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:48.669034 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:48.702415 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:49.050601 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:49.168823 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:49.201585 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:49.550210 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:49.669046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:49.701888 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:50.050296 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:50.169631 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:50.201503 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:50.551501 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:50.669281 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:50.702511 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:51.050900 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:51.169612 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:51.201816 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:51.552111 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:51.671918 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:51.702548 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:52.050260 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:52.168832 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:52.202188 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:52.550695 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:52.669650 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:52.702333 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:53.052245 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:53.169200 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:53.201611 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:53.550672 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:53.669444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:53.701777 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:54.051130 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:54.168868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:54.202046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:54.550431 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:54.669306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:54.701904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:55.051015 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:55.170280 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:55.201214 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:55.553236 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:55.668853 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:55.702340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:56.051092 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:56.169953 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:56.202452 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:56.551212 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:56.668750 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:56.702523 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:57.050964 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:57.169807 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:57.201803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:57.550211 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:57.668876 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:57.707900 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:58.050191 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:58.168681 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:58.202039 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:58.550833 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:58.669610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:58.701767 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:59.051468 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:59.169107 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:59.202715 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:59.551047 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:59.670592 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:59.701979 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:00.050778 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:00.169383 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:00.201834 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:00.551100 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:00.669963 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:00.771411 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:01.054273 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:01.169271 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:01.201602 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:01.550680 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:01.669283 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:01.701522 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:02.052977 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:02.169224 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:02.202291 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:02.550191 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:02.669159 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:02.701813 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:03.049670 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:03.198193 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:03.213735 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:03.551488 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:03.669126 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:03.704574 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:04.050148 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:04.169130 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:04.200961 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:04.550132 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:04.684815 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:04.702791 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:05.177951 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:05.178289 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:05.204849 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:05.551607 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:05.670725 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:05.708916 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:06.050874 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:06.172293 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:06.201971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:06.551280 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:06.669334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:06.701067 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:07.051436 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:07.169708 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:07.202011 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:07.552925 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:07.668863 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:07.701641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:08.050688 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:08.168959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:08.202195 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:08.550600 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:08.668882 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:08.702599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:09.051177 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:09.168919 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:09.203167 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:09.550992 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:09.669419 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:09.701472 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:10.051368 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:10.169506 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:10.201966 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:10.923307 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:10.927584 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:10.927913 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:11.050639 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:11.170106 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:11.272444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:11.552898 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:11.669527 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:11.701595 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:12.050322 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:12.168886 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:12.201829 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:12.550464 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:12.669150 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:12.771687 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:13.050505 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:13.169760 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:13.204975 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:13.551502 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:13.669335 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:13.701321 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:14.050505 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:14.170895 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:14.209305 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:14.550917 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:14.670374 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:14.703360 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:15.056811 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:15.170547 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:15.201903 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:15.551103 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:15.669672 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:15.701742 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:16.051467 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:16.169954 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:16.203694 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:16.551142 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:16.669768 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:16.702805 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:17.051501 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:17.169205 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:17.202951 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:17.551252 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:17.668660 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:17.701825 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:18.051434 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:18.171325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:18.203909 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:18.551201 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:18.670054 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:18.702443 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:19.050156 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:19.468641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:19.469516 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:19.550943 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:19.669264 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:19.759545 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:20.058136 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:20.170948 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:20.203636 1013451 kapi.go:107] duration metric: took 1m39.506848143s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 14:08:20.550335 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:20.668839 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:21.051466 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:21.169190 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:21.550095 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:21.668827 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:22.051580 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:22.169470 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:22.550664 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:22.669514 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:23.051018 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:23.169957 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:23.550439 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:23.669931 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:24.053965 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:24.169878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:24.550387 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:24.669803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:25.056975 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:25.172567 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:25.551153 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:25.670581 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:26.051385 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:26.169530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:26.551217 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:26.669338 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:27.050638 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:27.170170 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:27.550781 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:27.669538 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:28.051621 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:28.169483 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:28.550676 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:28.669440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:29.050516 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:29.169375 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:29.551751 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:29.669212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:30.050939 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:30.169393 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:30.550455 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:30.669253 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:31.050996 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:31.170070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:31.550206 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:31.668763 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:32.051626 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:32.169320 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:32.551069 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:32.669837 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:33.050330 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:33.168620 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:33.550910 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:33.670232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:34.051832 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:34.169178 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:34.550237 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:34.668760 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:35.051600 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:35.168763 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:35.551988 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:35.669108 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:36.051060 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:36.170390 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:36.550794 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:36.670426 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:37.050690 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:37.169249 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:37.550576 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:37.669601 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:38.051570 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:38.169093 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:38.550515 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:38.669589 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:39.050556 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:39.169165 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:39.549996 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:39.669744 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:40.051936 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:40.169233 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:40.551315 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:40.669719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:41.051496 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:41.169933 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:41.550270 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:41.669462 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:42.051430 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:42.169435 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:42.550648 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:42.669559 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:43.051075 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:43.170173 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:43.550411 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:43.669019 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:44.051147 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:44.169943 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:44.550616 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:44.669541 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:45.051936 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:45.169481 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:45.551946 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:45.669610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:46.051573 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:46.169440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:46.551239 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:46.669157 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:47.050473 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:47.169232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:47.550542 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:47.669197 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:48.050628 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:48.169232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:48.550646 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:48.669371 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:49.050350 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:49.168809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:49.552159 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:49.668741 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:50.096074 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:50.194902 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:50.551924 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:50.669444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:51.051559 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:51.169244 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:51.550779 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:51.669835 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:52.051039 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:52.170723 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:52.551544 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:52.669556 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:53.050634 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:53.169497 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:53.551283 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:53.670037 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:54.051147 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:54.170233 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:54.550184 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:54.669816 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:55.051429 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:55.169212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:55.550803 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:55.668993 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:56.050841 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:56.169885 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:56.550306 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:56.670189 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:57.050387 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:57.170258 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:57.551101 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:57.669797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:58.051185 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:58.170985 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:58.550560 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:58.676095 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:59.051442 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:59.169894 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:59.551564 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:59.670164 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:00.050493 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:00.170055 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:00.581252 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:00.780484 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:01.055777 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:01.174697 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:01.552096 1013451 kapi.go:107] duration metric: took 2m22.006221923s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 14:09:01.671070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:02.169799 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:02.683707 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:03.169279 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:03.670330 1013451 kapi.go:107] duration metric: took 2m20.004881029s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0127 14:09:03.672423 1013451 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-097644 cluster.
	I0127 14:09:03.673752 1013451 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 14:09:03.675214 1013451 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 14:09:03.676891 1013451 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner-rancher, nvidia-device-plugin, amd-gpu-device-plugin, metrics-server, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0127 14:09:03.678180 1013451 addons.go:514] duration metric: took 2m32.875560916s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner-rancher nvidia-device-plugin amd-gpu-device-plugin metrics-server storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0127 14:09:03.678236 1013451 start.go:246] waiting for cluster config update ...
	I0127 14:09:03.678259 1013451 start.go:255] writing updated cluster config ...
	I0127 14:09:03.678549 1013451 ssh_runner.go:195] Run: rm -f paused
	I0127 14:09:03.733995 1013451 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:09:03.735875 1013451 out.go:177] * Done! kubectl is now configured to use "addons-097644" cluster and "default" namespace by default
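	The gcp-auth messages above describe an opt-out: a pod that carries a label whose key is `gcp-auth-skip-secret` will not have the mounted GCP credentials injected. As a minimal sketch of what such a pod object could look like using client-go types (the pod name, image, and the label value "true" are illustrative assumptions; only the label key comes from the log message above):

	package example

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// skipGCPAuthPod builds a pod labeled so the gcp-auth addon is expected
	// to skip credential injection. Per the log message, only the label key
	// matters; the value "true" here is an assumption for illustration.
	func skipGCPAuthPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "nginx",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "nginx", Image: "docker.io/nginx:alpine"},
				},
			},
		}
	}

	Creating this object with a clientset (or the equivalent YAML via kubectl) should yield a pod the addon leaves unmodified, while all other pods in the addons-097644 cluster get the credentials mount described above.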
	
	
	==> CRI-O <==
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.321267096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987355321241474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1300de9-6e7d-4093-a317-0bbb3fe7cc2e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.321782762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=befaf3a0-9da9-4713-870e-f1c67f541df7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.321913901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=befaf3a0-9da9-4713-870e-f1c67f541df7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.322457578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fcf79af65e2f7ad903e2fc1428cdac9ca62e96e4b1719adfc6b9554c96fc10fe,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a10987
1454298c,State:CONTAINER_RUNNING,CreatedAt:1737986899554109765,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068a6720098eadf4f4cc6bf5aaeb9c19235c6135427dcb6635ff3c3296348d66,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6
aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737986897257220941,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf0dc31a82587724a7299f886e953908ae98475a469ab6e2ccec29ff56aa02d,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737986895399396457,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb04775296fca86d00d58e7fc8e6e3f8cf1fcaf194273f66b566147fd5a53515,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737986894242485986,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b3e12efdae42c1c06ab45af4d83f13b32ca2928c06f617e42cce259207eabf,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737986892559604249,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42ee84642ee13587425b84e6c0bb87bf25c5095b74dbe1c4e3fe30c384e6b05,PodSandboxId:dc97cd3cc6432a2c8e83961efb3496f6002bc9963dc0894ea326ba3bfafcb0a5,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737986890998219686,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e65ff6e-fdeb-4e47-a281-58d2846521dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875b7791795d9089a384c42a3f48f7d8c73948964f347e89facfd0db7cb6d872,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata
{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737986888843226637,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7136df91c57dc2b505215a87f2db26c920982ab23199646e92baf8a6114742,
PodSandboxId:67873ca52686ef5f09d5803b960439ff2e9dff63fe57e4e9bc4ae7755a4c3252,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737986887279068126,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4b69299-7108-4d71-a19f-c8640d4d9d7b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5b85488e514e0b9a70d6ad793832fc5bb440dd1e2
3119f7989c02aac92a0be,PodSandboxId:9f6d46661caff66f1c8478c624be9b9ee4cd73233b81554c51e040ef4ff9f134,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885581813198,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-bncpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b196166f-4021-4337-a63b-54cb610bac71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6697c28123fa723ad9e7b73bd9376412faac66eaa21c563ada2217e72ab04b,PodSandboxId:afd0e0bf3daf9323242cf3bf126cbca37788c8d89b388a951f2426ac862d252a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885280378193,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-pqf9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1173dcb4-3cf3-44b8-ae6f-7c755536337d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30
-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kub
ernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.p
od.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=befaf3a0-9da9-4713-870e-f1c67f541df7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.361471829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c425c1c4-74a9-45ce-a0c7-dbdc28925bf3 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.361542734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c425c1c4-74a9-45ce-a0c7-dbdc28925bf3 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.363080890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4408251-837e-4f54-abd9-d09bced81fc7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.364132661Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987355364104822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4408251-837e-4f54-abd9-d09bced81fc7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.364661585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b31b3166-2343-48fa-9698-b9cd3f5f4cce name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.364719646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b31b3166-2343-48fa-9698-b9cd3f5f4cce name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:15:55 addons-097644 crio[657]: time="2025-01-27 14:15:55.365324512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fcf79af65e2f7ad903e2fc1428cdac9ca62e96e4b1719adfc6b9554c96fc10fe,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a10987
1454298c,State:CONTAINER_RUNNING,CreatedAt:1737986899554109765,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068a6720098eadf4f4cc6bf5aaeb9c19235c6135427dcb6635ff3c3296348d66,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6
aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737986897257220941,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf0dc31a82587724a7299f886e953908ae98475a469ab6e2ccec29ff56aa02d,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737986895399396457,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb04775296fca86d00d58e7fc8e6e3f8cf1fcaf194273f66b566147fd5a53515,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737986894242485986,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b3e12efdae42c1c06ab45af4d83f13b32ca2928c06f617e42cce259207eabf,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737986892559604249,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42ee84642ee13587425b84e6c0bb87bf25c5095b74dbe1c4e3fe30c384e6b05,PodSandboxId:dc97cd3cc6432a2c8e83961efb3496f6002bc9963dc0894ea326ba3bfafcb0a5,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737986890998219686,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e65ff6e-fdeb-4e47-a281-58d2846521dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875b7791795d9089a384c42a3f48f7d8c73948964f347e89facfd0db7cb6d872,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata
{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737986888843226637,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7136df91c57dc2b505215a87f2db26c920982ab23199646e92baf8a6114742,
PodSandboxId:67873ca52686ef5f09d5803b960439ff2e9dff63fe57e4e9bc4ae7755a4c3252,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737986887279068126,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4b69299-7108-4d71-a19f-c8640d4d9d7b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5b85488e514e0b9a70d6ad793832fc5bb440dd1e2
3119f7989c02aac92a0be,PodSandboxId:9f6d46661caff66f1c8478c624be9b9ee4cd73233b81554c51e040ef4ff9f134,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885581813198,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-bncpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b196166f-4021-4337-a63b-54cb610bac71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6697c28123fa723ad9e7b73bd9376412faac66eaa21c563ada2217e72ab04b,PodSandboxId:afd0e0bf3daf9323242cf3bf126cbca37788c8d89b388a951f2426ac862d252a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885280378193,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-pqf9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1173dcb4-3cf3-44b8-ae6f-7c755536337d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30
-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,L
abels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kub
ernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.p
od.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b31b3166-2343-48fa-9698-b9cd3f5f4cce name=/runtime.v1.RuntimeService/ListContainers
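	The ListContainers dump above can be reproduced against the same CRI-O socket while the node is up; a minimal sketch, assuming the addons-097644 profile is still running and crictl is available in the guest:
	
	# shell into the minikube node for this profile
	minikube ssh -p addons-097644
	# same data as the /runtime.v1.RuntimeService/Version and ImageService/ImageFsInfo calls logged above
	sudo crictl version
	sudo crictl imagefsinfo
	# unfiltered container list, equivalent to the ListContainers responses above
	sudo crictl ps -a -o json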
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b1f81789dc134       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   5352d026f28eb       busybox
	31c99b76a81dc       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b                             6 minutes ago       Running             controller                               0                   4b29b0e077591       ingress-nginx-controller-56d7c84fd4-nz5zf
	fcf79af65e2f7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	068a6720098ea       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	3bf0dc31a8258       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	eb04775296fca       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	83b3e12efdae4       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	f42ee84642ee1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   dc97cd3cc6432       csi-hostpath-attacher-0
	875b7791795d9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	ef7136df91c57       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   67873ca52686e       csi-hostpath-resizer-0
	2e5b85488e514       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   9f6d46661caff       snapshot-controller-68b874b76f-bncpk
	58904f506013f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              patch                                    0                   331461d468a02       ingress-nginx-admission-patch-bzwfx
	fd6697c28123f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   afd0e0bf3daf9       snapshot-controller-68b874b76f-pqf9k
	6c9f1bf88ae46       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              create                                   0                   2b7094e2898b6       ingress-nginx-admission-create-k6p8j
	623ee8fa39474       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     9 minutes ago       Running             amd-gpu-device-plugin                    0                   0a6270a918122       amd-gpu-device-plugin-89xv2
	05863be1b9fa2       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             9 minutes ago       Running             minikube-ingress-dns                     0                   966718e37de57       kube-ingress-dns-minikube
	d33c8ab68a095       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             9 minutes ago       Running             storage-provisioner                      0                   a26522c3d4205       storage-provisioner
	2c916e18de1c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             9 minutes ago       Running             coredns                                  0                   548cc3bbe430b       coredns-668d6bf9bc-f5h88
	f90efac6917c6       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                                             9 minutes ago       Running             kube-proxy                               0                   8b4984c018663       kube-proxy-f4zwd
	c5e0a45028148       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                                             9 minutes ago       Running             etcd                                     0                   a8b62c040eb6f       etcd-addons-097644
	726cfe5819ce4       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                                             9 minutes ago       Running             kube-scheduler                           0                   37576819d5068       kube-scheduler-addons-097644
	507cc4bfd4bac       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                                             9 minutes ago       Running             kube-apiserver                           0                   eb6ed8d17f58c       kube-apiserver-addons-097644
	ca97beecbf34e       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                                             9 minutes ago       Running             kube-controller-manager                  0                   0c77accc1a4c1       kube-controller-manager-addons-097644
	
	
	==> coredns [2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079] <==
	[INFO] 10.244.0.8:34771 - 41457 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000328061s
	[INFO] 10.244.0.8:34771 - 47939 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000166673s
	[INFO] 10.244.0.8:34771 - 30775 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000224059s
	[INFO] 10.244.0.8:34771 - 16890 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093275s
	[INFO] 10.244.0.8:34771 - 16011 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000165914s
	[INFO] 10.244.0.8:34771 - 48692 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000088562s
	[INFO] 10.244.0.8:34771 - 33081 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000156426s
	[INFO] 10.244.0.8:55120 - 55152 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154188s
	[INFO] 10.244.0.8:55120 - 55445 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200323s
	[INFO] 10.244.0.8:54848 - 11098 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079484s
	[INFO] 10.244.0.8:54848 - 10854 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000185223s
	[INFO] 10.244.0.8:52222 - 8992 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065065s
	[INFO] 10.244.0.8:52222 - 8727 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141435s
	[INFO] 10.244.0.8:35583 - 57125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071096s
	[INFO] 10.244.0.8:35583 - 56925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00025462s
	[INFO] 10.244.0.23:58183 - 7007 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00047367s
	[INFO] 10.244.0.23:56358 - 26808 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002454598s
	[INFO] 10.244.0.23:37519 - 11515 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000306136s
	[INFO] 10.244.0.23:56095 - 53118 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000073046s
	[INFO] 10.244.0.23:52826 - 17024 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167726s
	[INFO] 10.244.0.23:58700 - 37913 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072195s
	[INFO] 10.244.0.23:59320 - 25584 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001303055s
	[INFO] 10.244.0.23:59906 - 15774 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001635555s
	[INFO] 10.244.0.27:50450 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00056016s
	[INFO] 10.244.0.27:51006 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141678s
	
	
	==> describe nodes <==
	Name:               addons-097644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-097644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=addons-097644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_06_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-097644
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-097644"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:06:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-097644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:15:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:10:30 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:10:30 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:10:30 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:10:30 +0000   Mon, 27 Jan 2025 14:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    addons-097644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 53015ffc2749464aa9b7aa6eb16c09c0
	  System UUID:                53015ffc-2749-464a-a9b7-aa6eb16c09c0
	  Boot ID:                    b226972f-a6fa-415b-9827-3320ed4fb6de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-nz5zf                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         9m16s
	  kube-system                 amd-gpu-device-plugin-89xv2                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 coredns-668d6bf9bc-f5h88                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m25s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 csi-hostpathplugin-8jql5                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 etcd-addons-097644                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m30s
	  kube-system                 kube-apiserver-addons-097644                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-controller-manager-addons-097644                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-f4zwd                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 kube-scheduler-addons-097644                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 snapshot-controller-68b874b76f-bncpk                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 snapshot-controller-68b874b76f-pqf9k                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  local-path-storage          helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m23s  kube-proxy       
	  Normal  Starting                 9m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m30s  kubelet          Node addons-097644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s  kubelet          Node addons-097644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s  kubelet          Node addons-097644 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m28s  kubelet          Node addons-097644 status is now: NodeReady
	  Normal  RegisteredNode           9m26s  node-controller  Node addons-097644 event: Registered Node addons-097644 in Controller
	
	
	==> dmesg <==
	[  +5.139758] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.148412] systemd-fstab-generator[1389]: Ignoring "noauto" option for root device
	[  +4.853975] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.047705] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.422180] kauditd_printk_skb: 124 callbacks suppressed
	[Jan27 14:07] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.435741] kauditd_printk_skb: 8 callbacks suppressed
	[ +16.990262] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 14:08] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.413265] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.243017] kauditd_printk_skb: 38 callbacks suppressed
	[Jan27 14:09] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.625061] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.938591] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.071460] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.141586] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.033258] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.978501] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.866607] kauditd_printk_skb: 11 callbacks suppressed
	[Jan27 14:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.534292] kauditd_printk_skb: 3 callbacks suppressed
	[ +13.735780] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.706759] kauditd_printk_skb: 24 callbacks suppressed
	[Jan27 14:11] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 14:15] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183] <==
	{"level":"info","ts":"2025-01-27T14:08:10.904125Z","caller":"traceutil/trace.go:171","msg":"trace[242124152] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"370.918398ms","start":"2025-01-27T14:08:10.533195Z","end":"2025-01-27T14:08:10.904114Z","steps":["trace[242124152] 'agreement among raft nodes before linearized reading'  (duration: 370.592929ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:10.904374Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:08:10.533183Z","time spent":"371.159759ms","remote":"127.0.0.1:48676","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T14:08:10.904722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.976353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:10.905648Z","caller":"traceutil/trace.go:171","msg":"trace[1424528345] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"221.920939ms","start":"2025-01-27T14:08:10.683718Z","end":"2025-01-27T14:08:10.905639Z","steps":["trace[1424528345] 'agreement among raft nodes before linearized reading'  (duration: 220.979628ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:10.904722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.96691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:10.906143Z","caller":"traceutil/trace.go:171","msg":"trace[918443162] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1047; }","duration":"299.34809ms","start":"2025-01-27T14:08:10.606727Z","end":"2025-01-27T14:08:10.906075Z","steps":["trace[918443162] 'agreement among raft nodes before linearized reading'  (duration: 297.968536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:10.904817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.575661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:10.908832Z","caller":"traceutil/trace.go:171","msg":"trace[148756941] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"256.607672ms","start":"2025-01-27T14:08:10.652214Z","end":"2025-01-27T14:08:10.908821Z","steps":["trace[148756941] 'agreement among raft nodes before linearized reading'  (duration: 252.568435ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:08:19.448000Z","caller":"traceutil/trace.go:171","msg":"trace[656750930] linearizableReadLoop","detail":"{readStateIndex:1139; appliedIndex:1138; }","duration":"296.312018ms","start":"2025-01-27T14:08:19.151675Z","end":"2025-01-27T14:08:19.447987Z","steps":["trace[656750930] 'read index received'  (duration: 296.141594ms)","trace[656750930] 'applied index is now lower than readState.Index'  (duration: 169.942µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:08:19.448186Z","caller":"traceutil/trace.go:171","msg":"trace[868736163] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"383.593879ms","start":"2025-01-27T14:08:19.064585Z","end":"2025-01-27T14:08:19.448179Z","steps":["trace[868736163] 'process raft request'  (duration: 383.321546ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:19.448344Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:08:19.064555Z","time spent":"383.668202ms","remote":"127.0.0.1:48734","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-097644\" mod_revision:1041 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-097644\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-097644\" > >"}
	{"level":"warn","ts":"2025-01-27T14:08:19.448623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.485588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:19.449317Z","caller":"traceutil/trace.go:171","msg":"trace[684967347] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"266.209192ms","start":"2025-01-27T14:08:19.183097Z","end":"2025-01-27T14:08:19.449306Z","steps":["trace[684967347] 'agreement among raft nodes before linearized reading'  (duration: 265.481327ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:19.448655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.980294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:19.449472Z","caller":"traceutil/trace.go:171","msg":"trace[1855013821] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"297.811336ms","start":"2025-01-27T14:08:19.151651Z","end":"2025-01-27T14:08:19.449462Z","steps":["trace[1855013821] 'agreement among raft nodes before linearized reading'  (duration: 296.993016ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:00.558225Z","caller":"traceutil/trace.go:171","msg":"trace[1913945553] transaction","detail":"{read_only:false; response_revision:1172; number_of_response:1; }","duration":"241.852683ms","start":"2025-01-27T14:09:00.316354Z","end":"2025-01-27T14:09:00.558207Z","steps":["trace[1913945553] 'process raft request'  (duration: 241.733069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:09:00.758982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.118372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:09:00.759127Z","caller":"traceutil/trace.go:171","msg":"trace[1498771159] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1172; }","duration":"109.340678ms","start":"2025-01-27T14:09:00.649774Z","end":"2025-01-27T14:09:00.759114Z","steps":["trace[1498771159] 'range keys from in-memory index tree'  (duration: 109.071803ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:46.406933Z","caller":"traceutil/trace.go:171","msg":"trace[1886057008] transaction","detail":"{read_only:false; response_revision:1428; number_of_response:1; }","duration":"194.14911ms","start":"2025-01-27T14:09:46.212756Z","end":"2025-01-27T14:09:46.406905Z","steps":["trace[1886057008] 'process raft request'  (duration: 193.987326ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:46.407441Z","caller":"traceutil/trace.go:171","msg":"trace[278748796] linearizableReadLoop","detail":"{readStateIndex:1488; appliedIndex:1488; }","duration":"179.099246ms","start":"2025-01-27T14:09:46.228323Z","end":"2025-01-27T14:09:46.407422Z","steps":["trace[278748796] 'read index received'  (duration: 179.093014ms)","trace[278748796] 'applied index is now lower than readState.Index'  (duration: 5.429µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:09:46.407629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.267358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab\" limit:1 ","response":"range_response_count:1 size:4006"}
	{"level":"info","ts":"2025-01-27T14:09:46.407673Z","caller":"traceutil/trace.go:171","msg":"trace[2015123404] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab; range_end:; response_count:1; response_revision:1428; }","duration":"179.426533ms","start":"2025-01-27T14:09:46.228236Z","end":"2025-01-27T14:09:46.407663Z","steps":["trace[2015123404] 'agreement among raft nodes before linearized reading'  (duration: 179.274245ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:10:20.372020Z","caller":"traceutil/trace.go:171","msg":"trace[308720478] linearizableReadLoop","detail":"{readStateIndex:1636; appliedIndex:1635; }","duration":"166.921538ms","start":"2025-01-27T14:10:20.205070Z","end":"2025-01-27T14:10:20.371992Z","steps":["trace[308720478] 'read index received'  (duration: 164.842263ms)","trace[308720478] 'applied index is now lower than readState.Index'  (duration: 2.078354ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:10:20.372181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.088702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:10:20.372218Z","caller":"traceutil/trace.go:171","msg":"trace[543223298] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1566; }","duration":"167.165514ms","start":"2025-01-27T14:10:20.205047Z","end":"2025-01-27T14:10:20.372213Z","steps":["trace[543223298] 'agreement among raft nodes before linearized reading'  (duration: 167.085674ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:15:55 up 10 min,  0 users,  load average: 0.41, 0.59, 0.46
	Linux addons-097644 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19] <==
	I0127 14:06:38.603309       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:06:38.608746       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:06:39.196104       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.101.77.230"}
	I0127 14:06:39.247680       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.106.94.214"}
	I0127 14:06:39.310159       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0127 14:06:40.278552       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.103.89.29"}
	I0127 14:06:40.295443       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0127 14:06:40.551299       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.238.108"}
	I0127 14:06:43.094310       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.54.150"}
	W0127 14:07:12.374826       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:07:12.375556       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0127 14:07:12.376214       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.247.116:443: connect: connection refused" logger="UnhandledError"
	E0127 14:07:12.378540       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.247.116:443: connect: connection refused" logger="UnhandledError"
	I0127 14:07:12.446208       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0127 14:09:14.350334       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:50822: use of closed network connection
	E0127 14:09:14.546345       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:50860: use of closed network connection
	I0127 14:09:23.868341       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.143.116"}
	I0127 14:10:08.420199       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 14:10:09.465650       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0127 14:10:13.397817       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 14:10:13.989804       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 14:10:14.197521       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.220.0"}
	
	
	==> kube-controller-manager [ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312] <==
	W0127 14:14:50.004003       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:14:50.005706       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:14:50.006720       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:14:50.006780       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0127 14:14:59.582988       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0127 14:15:14.583470       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0127 14:15:15.349161       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:15.577892       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:15.704877       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:15.979983       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:16.227645       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:16.423112       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:16.722007       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:17.194208       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:18.088386       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:19.506007       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:22.209950       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:27.452641       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	W0127 14:15:27.848068       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:15:27.849021       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:15:27.849703       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:15:27.849737       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0127 14:15:29.583760       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E0127 14:15:37.809945       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E0127 14:15:44.583956       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	
	
	==> kube-proxy [f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:06:31.963275       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:06:31.979022       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	E0127 14:06:31.979136       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:06:32.077913       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:06:32.077966       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:06:32.077989       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:06:32.084140       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:06:32.085000       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:06:32.085035       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:06:32.100525       1 config.go:199] "Starting service config controller"
	I0127 14:06:32.100558       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:06:32.100585       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:06:32.100589       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:06:32.101170       1 config.go:329] "Starting node config controller"
	I0127 14:06:32.101178       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:06:32.200914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:06:32.200982       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:06:32.201769       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975] <==
	W0127 14:06:23.028289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 14:06:23.028514       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.028258       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:23.028527       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.832298       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 14:06:23.832354       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.890209       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 14:06:23.890242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.952607       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 14:06:23.952764       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.012969       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:24.013220       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.013000       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 14:06:24.013543       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 14:06:24.051624       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 14:06:24.051685       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.102044       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 14:06:24.102173       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.130067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 14:06:24.130122       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.176207       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 14:06:24.176269       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.284632       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:24.284687       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 14:06:26.404586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:15:06 addons-097644 kubelet[1230]: E0127 14:15:06.162024    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987306161260998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:06 addons-097644 kubelet[1230]: E0127 14:15:06.162170    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987306161260998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:13 addons-097644 kubelet[1230]: I0127 14:15:13.803604    1230 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 14:15:15 addons-097644 kubelet[1230]: E0127 14:15:15.809383    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8832c7fb-d1d2-4a01-8fb6-65e44ed2a850"
	Jan 27 14:15:16 addons-097644 kubelet[1230]: E0127 14:15:16.166098    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987316165115057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:16 addons-097644 kubelet[1230]: E0127 14:15:16.166199    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987316165115057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:17 addons-097644 kubelet[1230]: E0127 14:15:17.971771    1230 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Jan 27 14:15:17 addons-097644 kubelet[1230]: E0127 14:15:17.972252    1230 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Jan 27 14:15:17 addons-097644 kubelet[1230]: E0127 14:15:17.974058    1230 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hck28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(e1bbc3eb-e3d8-4361-986a-7836ef9e6bac): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jan 27 14:15:17 addons-097644 kubelet[1230]: E0127 14:15:17.975471    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	Jan 27 14:15:25 addons-097644 kubelet[1230]: E0127 14:15:25.826511    1230 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:15:25 addons-097644 kubelet[1230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:15:25 addons-097644 kubelet[1230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:15:25 addons-097644 kubelet[1230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:15:25 addons-097644 kubelet[1230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:15:26 addons-097644 kubelet[1230]: E0127 14:15:26.169309    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987326168907454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:26 addons-097644 kubelet[1230]: E0127 14:15:26.169360    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987326168907454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:28 addons-097644 kubelet[1230]: E0127 14:15:28.806270    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	Jan 27 14:15:36 addons-097644 kubelet[1230]: E0127 14:15:36.172469    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987336172095529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:36 addons-097644 kubelet[1230]: E0127 14:15:36.172962    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987336172095529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:39 addons-097644 kubelet[1230]: E0127 14:15:39.807118    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	Jan 27 14:15:46 addons-097644 kubelet[1230]: E0127 14:15:46.176380    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987346175769285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:46 addons-097644 kubelet[1230]: E0127 14:15:46.176482    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987346175769285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:15:46 addons-097644 kubelet[1230]: I0127 14:15:46.803749    1230 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-89xv2" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 14:15:52 addons-097644 kubelet[1230]: E0127 14:15:52.805030    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	
	
	==> storage-provisioner [d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2] <==
	I0127 14:06:41.758709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:06:41.803907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:06:41.804042       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:06:41.825628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:06:41.825800       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5!
	I0127 14:06:41.826617       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"798f666d-0618-4e6e-9910-6786e4bc55d6", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5 became leader
	I0127 14:06:41.926306       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-097644 -n addons-097644
helpers_test.go:261: (dbg) Run:  kubectl --context addons-097644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab: exit status 1 (81.519876ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-097644/192.168.39.228
	Start Time:       Mon, 27 Jan 2025 14:10:14 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hck28 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hck28:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m42s                  default-scheduler  Successfully assigned default/nginx to addons-097644
	  Normal   Pulling    2m17s (x3 over 5m42s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     39s (x3 over 4m27s)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     39s (x3 over 4m27s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x5 over 4m27s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x5 over 4m27s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-097644/192.168.39.228
	Start Time:       Mon, 27 Jan 2025 14:09:54 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vdzn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-9vdzn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-097644
	  Warning  Failed     69s (x3 over 4m58s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     69s (x3 over 4m58s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    41s (x4 over 4m57s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     41s (x4 over 4m57s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    26s (x4 over 6m2s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xj65w (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xj65w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-k6p8j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bzwfx" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.86824873s)
--- FAIL: TestAddons/parallel/CSI (388.77s)
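Editor's note: the CSI failure above never reaches the CSI driver itself; the nginx and task-pv-pod containers sit in ImagePullBackOff because anonymous pulls of docker.io/nginx hit Docker Hub's toomanyrequests rate limit. One possible mitigation for runs like this, sketched below purely as an assumption and not as anything the test suite actually does, is to pre-load the image into the cluster so the kubelet never contacts Docker Hub. The prePullImage helper name is hypothetical; the binary path and profile name are reused from this run.

package main // illustrative sketch, not part of the test suite

import (
	"fmt"
	"os/exec"
)

// prePullImage (hypothetical helper) copies a locally available image into the
// minikube node's container storage so later pod starts do not pull from
// Docker Hub.
func prePullImage(minikubeBin, profile, image string) error {
	// "minikube image load" pushes an image from the host into the cluster.
	cmd := exec.Command(minikubeBin, "-p", profile, "image", "load", image)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("image load %s: %v: %s", image, err, out)
	}
	return nil
}

func main() {
	// Values taken from this test run; adjust for other profiles.
	err := prePullImage("out/minikube-linux-amd64", "addons-097644", "docker.io/nginx:alpine")
	if err != nil {
		fmt.Println(err)
	}
}

An authenticated pull secret attached to the default service account would be an alternative way to avoid the anonymous rate limit.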

                                                
                                    
x
+
TestAddons/parallel/LocalPath (425.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-097644 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-097644 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-097644 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (880ns)
helpers_test.go:396: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:899: failed waiting for PVC test-pvc: context deadline exceeded
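For context, the several hundred identical lines above are helpers_test.go polling the PVC phase once per interval until the test's 5m0s deadline expires; the phase never reports Bound before the context deadline. A minimal standalone sketch of such a poll loop, assuming kubectl is on PATH and reusing the kube context from this run, could look like the following; waitForPVCBound is a hypothetical name, not the actual helper in helpers_test.go.

package main // illustrative sketch of a PVC-phase poll loop

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForPVCBound (hypothetical) polls the PVC phase until it is Bound or the
// context deadline expires, mirroring the repeated kubectl calls in the log.
func waitForPVCBound(ctx context.Context, kubeContext, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext, "get", "pvc", name,
			"-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pvc %s/%s not Bound: %w", ns, name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForPVCBound(ctx, "addons-097644", "default", "test-pvc"); err != nil {
		fmt.Println(err)
	}
}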
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-097644 -n addons-097644
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 logs -n 25: (1.417721501s)
helpers_test.go:252: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | -p download-only-671066              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-671066              | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | -o=json --download-only              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | -p download-only-223205              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-223205              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-671066              | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-223205              | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | --download-only -p                   | binary-mirror-105715 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | binary-mirror-105715                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46267               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-105715              | binary-mirror-105715 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| addons  | enable dashboard -p                  | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | addons-097644                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | addons-097644                        |                      |         |         |                     |                     |
	| start   | -p addons-097644 --wait=true         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:09 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | -p addons-097644                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable nvidia-device-plugin         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:09 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:09 UTC | 27 Jan 25 14:10 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-097644 addons                 | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ip      | addons-097644 ip                     | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-097644 addons disable         | addons-097644        | jenkins | v1.35.0 | 27 Jan 25 14:10 UTC | 27 Jan 25 14:10 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
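	For readability, the multi-addon start captured in the audit table above amounts to roughly the following single invocation. This is a reconstruction from the table rows only; the binary path is the MINIKUBE_BIN value shown in the log below, and no flags were added or altered:
	
	out/minikube-linux-amd64 start -p addons-097644 --wait=true --memory=4000 --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	  --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
	  --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio \
	  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher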
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:05:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
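	(Reading the entries below: in a line such as "I0127 14:05:43.780693 1013451 out.go:345] ...", the leading I is the Info severity, 0127 is January 27, the microsecond timestamp follows, 1013451 is the thread id, and out.go:345 is the emitting file and line.)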
	I0127 14:05:43.780693 1013451 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:05:43.780813 1013451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:43.780825 1013451 out.go:358] Setting ErrFile to fd 2...
	I0127 14:05:43.780832 1013451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:43.781030 1013451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:05:43.781664 1013451 out.go:352] Setting JSON to false
	I0127 14:05:43.782666 1013451 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17291,"bootTime":1737969453,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:05:43.782784 1013451 start.go:139] virtualization: kvm guest
	I0127 14:05:43.784893 1013451 out.go:177] * [addons-097644] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:05:43.787056 1013451 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:05:43.787061 1013451 notify.go:220] Checking for updates...
	I0127 14:05:43.789034 1013451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:05:43.790539 1013451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:05:43.791834 1013451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:43.792947 1013451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:05:43.794209 1013451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:05:43.795600 1013451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:05:43.828945 1013451 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 14:05:43.830536 1013451 start.go:297] selected driver: kvm2
	I0127 14:05:43.830549 1013451 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:05:43.830562 1013451 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:05:43.831266 1013451 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:43.831371 1013451 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:05:43.846805 1013451 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:05:43.846858 1013451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:05:43.847096 1013451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:05:43.847130 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:05:43.847177 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:05:43.847185 1013451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:05:43.847240 1013451 start.go:340] cluster config:
	{Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:05:43.847356 1013451 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:43.849197 1013451 out.go:177] * Starting "addons-097644" primary control-plane node in "addons-097644" cluster
	I0127 14:05:43.850425 1013451 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:05:43.850456 1013451 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:05:43.850465 1013451 cache.go:56] Caching tarball of preloaded images
	I0127 14:05:43.850551 1013451 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 14:05:43.850561 1013451 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:05:43.850859 1013451 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json ...
	I0127 14:05:43.850881 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json: {Name:mkf76d9208747a70ff9df6e74ebaa16aff66d9f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:43.851032 1013451 start.go:360] acquireMachinesLock for addons-097644: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 14:05:43.851095 1013451 start.go:364] duration metric: took 44.724µs to acquireMachinesLock for "addons-097644"
	I0127 14:05:43.851120 1013451 start.go:93] Provisioning new machine with config: &{Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:05:43.851186 1013451 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 14:05:43.852924 1013451 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0127 14:05:43.853096 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:05:43.853162 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:05:43.867886 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42303
	I0127 14:05:43.868410 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:05:43.868979 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:05:43.869040 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:05:43.869524 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:05:43.869744 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:05:43.869931 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:05:43.870113 1013451 start.go:159] libmachine.API.Create for "addons-097644" (driver="kvm2")
	I0127 14:05:43.870140 1013451 client.go:168] LocalClient.Create starting
	I0127 14:05:43.870192 1013451 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem
	I0127 14:05:43.971967 1013451 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem
	I0127 14:05:44.102745 1013451 main.go:141] libmachine: Running pre-create checks...
	I0127 14:05:44.102770 1013451 main.go:141] libmachine: (addons-097644) Calling .PreCreateCheck
	I0127 14:05:44.103352 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:05:44.103882 1013451 main.go:141] libmachine: Creating machine...
	I0127 14:05:44.103898 1013451 main.go:141] libmachine: (addons-097644) Calling .Create
	I0127 14:05:44.104114 1013451 main.go:141] libmachine: (addons-097644) creating KVM machine...
	I0127 14:05:44.104136 1013451 main.go:141] libmachine: (addons-097644) creating network...
	I0127 14:05:44.105430 1013451 main.go:141] libmachine: (addons-097644) DBG | found existing default KVM network
	I0127 14:05:44.106433 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.106217 1013473 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123ba0}
	I0127 14:05:44.106460 1013451 main.go:141] libmachine: (addons-097644) DBG | created network xml: 
	I0127 14:05:44.106474 1013451 main.go:141] libmachine: (addons-097644) DBG | <network>
	I0127 14:05:44.106506 1013451 main.go:141] libmachine: (addons-097644) DBG |   <name>mk-addons-097644</name>
	I0127 14:05:44.106520 1013451 main.go:141] libmachine: (addons-097644) DBG |   <dns enable='no'/>
	I0127 14:05:44.106527 1013451 main.go:141] libmachine: (addons-097644) DBG |   
	I0127 14:05:44.106538 1013451 main.go:141] libmachine: (addons-097644) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 14:05:44.106549 1013451 main.go:141] libmachine: (addons-097644) DBG |     <dhcp>
	I0127 14:05:44.106558 1013451 main.go:141] libmachine: (addons-097644) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 14:05:44.106566 1013451 main.go:141] libmachine: (addons-097644) DBG |     </dhcp>
	I0127 14:05:44.106585 1013451 main.go:141] libmachine: (addons-097644) DBG |   </ip>
	I0127 14:05:44.106598 1013451 main.go:141] libmachine: (addons-097644) DBG |   
	I0127 14:05:44.106608 1013451 main.go:141] libmachine: (addons-097644) DBG | </network>
	I0127 14:05:44.106620 1013451 main.go:141] libmachine: (addons-097644) DBG | 
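	(Aside: an equivalent libvirt network could also be created by hand from XML like the snippet just logged. A minimal sketch, assuming the XML has been saved to a hypothetical file mk-addons-097644.xml; the test itself creates the network programmatically through the kvm2 driver, not via virsh:
	
	virsh net-define mk-addons-097644.xml   # register the persistent network definition
	virsh net-start mk-addons-097644        # start it, which creates the virbr bridge and DHCP range
	)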
	I0127 14:05:44.112205 1013451 main.go:141] libmachine: (addons-097644) DBG | trying to create private KVM network mk-addons-097644 192.168.39.0/24...
	I0127 14:05:44.180056 1013451 main.go:141] libmachine: (addons-097644) DBG | private KVM network mk-addons-097644 192.168.39.0/24 created
	I0127 14:05:44.180144 1013451 main.go:141] libmachine: (addons-097644) setting up store path in /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 ...
	I0127 14:05:44.180171 1013451 main.go:141] libmachine: (addons-097644) building disk image from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:05:44.180189 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.180124 1013473 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:44.180396 1013451 main.go:141] libmachine: (addons-097644) Downloading /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 14:05:44.489532 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.489354 1013473 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa...
	I0127 14:05:44.674691 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.674507 1013473 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/addons-097644.rawdisk...
	I0127 14:05:44.674726 1013451 main.go:141] libmachine: (addons-097644) DBG | Writing magic tar header
	I0127 14:05:44.674736 1013451 main.go:141] libmachine: (addons-097644) DBG | Writing SSH key tar header
	I0127 14:05:44.674747 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:44.674662 1013473 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 ...
	I0127 14:05:44.674836 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644
	I0127 14:05:44.674866 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644 (perms=drwx------)
	I0127 14:05:44.674877 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines
	I0127 14:05:44.674890 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:44.674897 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652
	I0127 14:05:44.674908 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 14:05:44.674915 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home/jenkins
	I0127 14:05:44.674926 1013451 main.go:141] libmachine: (addons-097644) DBG | checking permissions on dir: /home
	I0127 14:05:44.674933 1013451 main.go:141] libmachine: (addons-097644) DBG | skipping /home - not owner
	I0127 14:05:44.674963 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines (perms=drwxr-xr-x)
	I0127 14:05:44.674987 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube (perms=drwxr-xr-x)
	I0127 14:05:44.675015 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652 (perms=drwxrwxr-x)
	I0127 14:05:44.675025 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 14:05:44.675035 1013451 main.go:141] libmachine: (addons-097644) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 14:05:44.675040 1013451 main.go:141] libmachine: (addons-097644) creating domain...
	I0127 14:05:44.676087 1013451 main.go:141] libmachine: (addons-097644) define libvirt domain using xml: 
	I0127 14:05:44.676112 1013451 main.go:141] libmachine: (addons-097644) <domain type='kvm'>
	I0127 14:05:44.676119 1013451 main.go:141] libmachine: (addons-097644)   <name>addons-097644</name>
	I0127 14:05:44.676125 1013451 main.go:141] libmachine: (addons-097644)   <memory unit='MiB'>4000</memory>
	I0127 14:05:44.676133 1013451 main.go:141] libmachine: (addons-097644)   <vcpu>2</vcpu>
	I0127 14:05:44.676142 1013451 main.go:141] libmachine: (addons-097644)   <features>
	I0127 14:05:44.676170 1013451 main.go:141] libmachine: (addons-097644)     <acpi/>
	I0127 14:05:44.676190 1013451 main.go:141] libmachine: (addons-097644)     <apic/>
	I0127 14:05:44.676198 1013451 main.go:141] libmachine: (addons-097644)     <pae/>
	I0127 14:05:44.676204 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676219 1013451 main.go:141] libmachine: (addons-097644)   </features>
	I0127 14:05:44.676234 1013451 main.go:141] libmachine: (addons-097644)   <cpu mode='host-passthrough'>
	I0127 14:05:44.676256 1013451 main.go:141] libmachine: (addons-097644)   
	I0127 14:05:44.676274 1013451 main.go:141] libmachine: (addons-097644)   </cpu>
	I0127 14:05:44.676285 1013451 main.go:141] libmachine: (addons-097644)   <os>
	I0127 14:05:44.676290 1013451 main.go:141] libmachine: (addons-097644)     <type>hvm</type>
	I0127 14:05:44.676295 1013451 main.go:141] libmachine: (addons-097644)     <boot dev='cdrom'/>
	I0127 14:05:44.676302 1013451 main.go:141] libmachine: (addons-097644)     <boot dev='hd'/>
	I0127 14:05:44.676329 1013451 main.go:141] libmachine: (addons-097644)     <bootmenu enable='no'/>
	I0127 14:05:44.676352 1013451 main.go:141] libmachine: (addons-097644)   </os>
	I0127 14:05:44.676365 1013451 main.go:141] libmachine: (addons-097644)   <devices>
	I0127 14:05:44.676382 1013451 main.go:141] libmachine: (addons-097644)     <disk type='file' device='cdrom'>
	I0127 14:05:44.676400 1013451 main.go:141] libmachine: (addons-097644)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/boot2docker.iso'/>
	I0127 14:05:44.676411 1013451 main.go:141] libmachine: (addons-097644)       <target dev='hdc' bus='scsi'/>
	I0127 14:05:44.676436 1013451 main.go:141] libmachine: (addons-097644)       <readonly/>
	I0127 14:05:44.676446 1013451 main.go:141] libmachine: (addons-097644)     </disk>
	I0127 14:05:44.676457 1013451 main.go:141] libmachine: (addons-097644)     <disk type='file' device='disk'>
	I0127 14:05:44.676474 1013451 main.go:141] libmachine: (addons-097644)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 14:05:44.676491 1013451 main.go:141] libmachine: (addons-097644)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/addons-097644.rawdisk'/>
	I0127 14:05:44.676503 1013451 main.go:141] libmachine: (addons-097644)       <target dev='hda' bus='virtio'/>
	I0127 14:05:44.676512 1013451 main.go:141] libmachine: (addons-097644)     </disk>
	I0127 14:05:44.676523 1013451 main.go:141] libmachine: (addons-097644)     <interface type='network'>
	I0127 14:05:44.676535 1013451 main.go:141] libmachine: (addons-097644)       <source network='mk-addons-097644'/>
	I0127 14:05:44.676543 1013451 main.go:141] libmachine: (addons-097644)       <model type='virtio'/>
	I0127 14:05:44.676554 1013451 main.go:141] libmachine: (addons-097644)     </interface>
	I0127 14:05:44.676567 1013451 main.go:141] libmachine: (addons-097644)     <interface type='network'>
	I0127 14:05:44.676577 1013451 main.go:141] libmachine: (addons-097644)       <source network='default'/>
	I0127 14:05:44.676588 1013451 main.go:141] libmachine: (addons-097644)       <model type='virtio'/>
	I0127 14:05:44.676597 1013451 main.go:141] libmachine: (addons-097644)     </interface>
	I0127 14:05:44.676607 1013451 main.go:141] libmachine: (addons-097644)     <serial type='pty'>
	I0127 14:05:44.676615 1013451 main.go:141] libmachine: (addons-097644)       <target port='0'/>
	I0127 14:05:44.676624 1013451 main.go:141] libmachine: (addons-097644)     </serial>
	I0127 14:05:44.676638 1013451 main.go:141] libmachine: (addons-097644)     <console type='pty'>
	I0127 14:05:44.676650 1013451 main.go:141] libmachine: (addons-097644)       <target type='serial' port='0'/>
	I0127 14:05:44.676666 1013451 main.go:141] libmachine: (addons-097644)     </console>
	I0127 14:05:44.676678 1013451 main.go:141] libmachine: (addons-097644)     <rng model='virtio'>
	I0127 14:05:44.676688 1013451 main.go:141] libmachine: (addons-097644)       <backend model='random'>/dev/random</backend>
	I0127 14:05:44.676695 1013451 main.go:141] libmachine: (addons-097644)     </rng>
	I0127 14:05:44.676702 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676711 1013451 main.go:141] libmachine: (addons-097644)     
	I0127 14:05:44.676720 1013451 main.go:141] libmachine: (addons-097644)   </devices>
	I0127 14:05:44.676726 1013451 main.go:141] libmachine: (addons-097644) </domain>
	I0127 14:05:44.676788 1013451 main.go:141] libmachine: (addons-097644) 
	I0127 14:05:44.681531 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:bc:17:24 in network default
	I0127 14:05:44.682103 1013451 main.go:141] libmachine: (addons-097644) starting domain...
	I0127 14:05:44.682120 1013451 main.go:141] libmachine: (addons-097644) ensuring networks are active...
	I0127 14:05:44.682127 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:44.682898 1013451 main.go:141] libmachine: (addons-097644) Ensuring network default is active
	I0127 14:05:44.683272 1013451 main.go:141] libmachine: (addons-097644) Ensuring network mk-addons-097644 is active
	I0127 14:05:44.683705 1013451 main.go:141] libmachine: (addons-097644) getting domain XML...
	I0127 14:05:44.684437 1013451 main.go:141] libmachine: (addons-097644) creating domain...
	I0127 14:05:45.896162 1013451 main.go:141] libmachine: (addons-097644) waiting for IP...
	I0127 14:05:45.896892 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:45.897344 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:45.897436 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:45.897354 1013473 retry.go:31] will retry after 236.581088ms: waiting for domain to come up
	I0127 14:05:46.135836 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.136377 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.136409 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.136324 1013473 retry.go:31] will retry after 316.29449ms: waiting for domain to come up
	I0127 14:05:46.454651 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.455132 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.455160 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.455064 1013473 retry.go:31] will retry after 470.066632ms: waiting for domain to come up
	I0127 14:05:46.926708 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:46.927233 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:46.927260 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:46.927215 1013473 retry.go:31] will retry after 394.465051ms: waiting for domain to come up
	I0127 14:05:47.322830 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:47.323381 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:47.323413 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:47.323322 1013473 retry.go:31] will retry after 512.0087ms: waiting for domain to come up
	I0127 14:05:47.837180 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:47.837627 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:47.837654 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:47.837597 1013473 retry.go:31] will retry after 602.684619ms: waiting for domain to come up
	I0127 14:05:48.441447 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:48.441865 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:48.441895 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:48.441834 1013473 retry.go:31] will retry after 1.057148427s: waiting for domain to come up
	I0127 14:05:49.501034 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:49.501504 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:49.501527 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:49.501455 1013473 retry.go:31] will retry after 1.147761253s: waiting for domain to come up
	I0127 14:05:50.651314 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:50.651817 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:50.651882 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:50.651766 1013473 retry.go:31] will retry after 1.445396149s: waiting for domain to come up
	I0127 14:05:52.098809 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:52.099216 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:52.099250 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:52.099170 1013473 retry.go:31] will retry after 2.075111556s: waiting for domain to come up
	I0127 14:05:54.175631 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:54.176081 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:54.176131 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:54.176071 1013473 retry.go:31] will retry after 1.984245215s: waiting for domain to come up
	I0127 14:05:56.163386 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:56.163785 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:56.163814 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:56.163743 1013473 retry.go:31] will retry after 2.265903927s: waiting for domain to come up
	I0127 14:05:58.432199 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:05:58.432532 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:05:58.432610 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:05:58.432499 1013473 retry.go:31] will retry after 4.367217291s: waiting for domain to come up
	I0127 14:06:02.802210 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:02.802571 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find current IP address of domain addons-097644 in network mk-addons-097644
	I0127 14:06:02.802600 1013451 main.go:141] libmachine: (addons-097644) DBG | I0127 14:06:02.802549 1013473 retry.go:31] will retry after 3.598012851s: waiting for domain to come up
	I0127 14:06:06.403574 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.404009 1013451 main.go:141] libmachine: (addons-097644) found domain IP: 192.168.39.228
	I0127 14:06:06.404030 1013451 main.go:141] libmachine: (addons-097644) reserving static IP address...
	I0127 14:06:06.404042 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has current primary IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.404496 1013451 main.go:141] libmachine: (addons-097644) DBG | unable to find host DHCP lease matching {name: "addons-097644", mac: "52:54:00:9d:d4:27", ip: "192.168.39.228"} in network mk-addons-097644
	I0127 14:06:06.482117 1013451 main.go:141] libmachine: (addons-097644) reserved static IP address 192.168.39.228 for domain addons-097644
	I0127 14:06:06.482150 1013451 main.go:141] libmachine: (addons-097644) DBG | Getting to WaitForSSH function...
	I0127 14:06:06.482159 1013451 main.go:141] libmachine: (addons-097644) waiting for SSH...
	I0127 14:06:06.484542 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.484916 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.484946 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.485093 1013451 main.go:141] libmachine: (addons-097644) DBG | Using SSH client type: external
	I0127 14:06:06.485123 1013451 main.go:141] libmachine: (addons-097644) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa (-rw-------)
	I0127 14:06:06.485171 1013451 main.go:141] libmachine: (addons-097644) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 14:06:06.485189 1013451 main.go:141] libmachine: (addons-097644) DBG | About to run SSH command:
	I0127 14:06:06.485232 1013451 main.go:141] libmachine: (addons-097644) DBG | exit 0
	I0127 14:06:06.609772 1013451 main.go:141] libmachine: (addons-097644) DBG | SSH cmd err, output: <nil>: 
	I0127 14:06:06.610069 1013451 main.go:141] libmachine: (addons-097644) KVM machine creation complete
	I0127 14:06:06.610555 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:06:06.611165 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:06.611373 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:06.611586 1013451 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 14:06:06.611621 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:06.613057 1013451 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 14:06:06.613073 1013451 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 14:06:06.613081 1013451 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 14:06:06.613090 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.615644 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.616035 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.616063 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.616199 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.616362 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.616508 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.616657 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.616824 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.617054 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.617068 1013451 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 14:06:06.716630 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:06:06.716673 1013451 main.go:141] libmachine: Detecting the provisioner...
	I0127 14:06:06.716681 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.719631 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.719945 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.719967 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.720264 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.720503 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.720685 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.720841 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.721000 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.721236 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.721251 1013451 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 14:06:06.826035 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 14:06:06.826137 1013451 main.go:141] libmachine: found compatible host: buildroot
	I0127 14:06:06.826152 1013451 main.go:141] libmachine: Provisioning with buildroot...
	I0127 14:06:06.826166 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:06.826460 1013451 buildroot.go:166] provisioning hostname "addons-097644"
	I0127 14:06:06.826496 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:06.826730 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.829265 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.829710 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.829746 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.829916 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.830136 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.830299 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.830442 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.830601 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.830779 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.830790 1013451 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-097644 && echo "addons-097644" | sudo tee /etc/hostname
	I0127 14:06:06.943475 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-097644
	
	I0127 14:06:06.943511 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:06.946454 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.946884 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:06.946916 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:06.947078 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:06.947278 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.947449 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:06.947589 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:06.947760 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:06.947980 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:06.948004 1013451 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-097644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-097644/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-097644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 14:06:07.054387 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 14:06:07.054446 1013451 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 14:06:07.054503 1013451 buildroot.go:174] setting up certificates
	I0127 14:06:07.054527 1013451 provision.go:84] configureAuth start
	I0127 14:06:07.054547 1013451 main.go:141] libmachine: (addons-097644) Calling .GetMachineName
	I0127 14:06:07.054845 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.057428 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.057824 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.057852 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.057989 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.060187 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.060520 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.060546 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.060713 1013451 provision.go:143] copyHostCerts
	I0127 14:06:07.060793 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 14:06:07.060906 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 14:06:07.060974 1013451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 14:06:07.061053 1013451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.addons-097644 san=[127.0.0.1 192.168.39.228 addons-097644 localhost minikube]
	I0127 14:06:07.171259 1013451 provision.go:177] copyRemoteCerts
	I0127 14:06:07.171332 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 14:06:07.171359 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.173936 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.174300 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.174345 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.174507 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.174718 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.174901 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.175049 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.256072 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 14:06:07.280263 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 14:06:07.304563 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 14:06:07.328463 1013451 provision.go:87] duration metric: took 273.91293ms to configureAuth
	I0127 14:06:07.328503 1013451 buildroot.go:189] setting minikube options for container-runtime
	I0127 14:06:07.328710 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:07.328812 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.331515 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.331824 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.331855 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.332095 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.332304 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.332494 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.332664 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.332827 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:07.333034 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:07.333056 1013451 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 14:06:07.551437 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 14:06:07.551470 1013451 main.go:141] libmachine: Checking connection to Docker...
	I0127 14:06:07.551481 1013451 main.go:141] libmachine: (addons-097644) Calling .GetURL
	I0127 14:06:07.552717 1013451 main.go:141] libmachine: (addons-097644) DBG | using libvirt version 6000000
	I0127 14:06:07.554862 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.555265 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.555309 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.555465 1013451 main.go:141] libmachine: Docker is up and running!
	I0127 14:06:07.555482 1013451 main.go:141] libmachine: Reticulating splines...
	I0127 14:06:07.555493 1013451 client.go:171] duration metric: took 23.685342954s to LocalClient.Create
	I0127 14:06:07.555525 1013451 start.go:167] duration metric: took 23.68541238s to libmachine.API.Create "addons-097644"
	I0127 14:06:07.555552 1013451 start.go:293] postStartSetup for "addons-097644" (driver="kvm2")
	I0127 14:06:07.555570 1013451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 14:06:07.555596 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.555863 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 14:06:07.555889 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.557878 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.558160 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.558198 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.558312 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.558488 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.558664 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.558817 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.640270 1013451 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 14:06:07.644537 1013451 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 14:06:07.644585 1013451 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 14:06:07.644664 1013451 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 14:06:07.644692 1013451 start.go:296] duration metric: took 89.13009ms for postStartSetup
	I0127 14:06:07.644732 1013451 main.go:141] libmachine: (addons-097644) Calling .GetConfigRaw
	I0127 14:06:07.645370 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.648039 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.648405 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.648434 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.648695 1013451 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/config.json ...
	I0127 14:06:07.648902 1013451 start.go:128] duration metric: took 23.797703895s to createHost
	I0127 14:06:07.648927 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.651100 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.651434 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.651481 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.651607 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.651822 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.651975 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.652136 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.652325 1013451 main.go:141] libmachine: Using SSH client type: native
	I0127 14:06:07.652538 1013451 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0127 14:06:07.652554 1013451 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 14:06:07.750310 1013451 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737986767.722256723
	
	I0127 14:06:07.750337 1013451 fix.go:216] guest clock: 1737986767.722256723
	I0127 14:06:07.750344 1013451 fix.go:229] Guest: 2025-01-27 14:06:07.722256723 +0000 UTC Remote: 2025-01-27 14:06:07.648915936 +0000 UTC m=+23.906997834 (delta=73.340787ms)
	I0127 14:06:07.750387 1013451 fix.go:200] guest clock delta is within tolerance: 73.340787ms
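The clock check above runs "date +%s.%N" in the guest and compares the result against the host-side timestamp, accepting the boot if the drift is small. The sketch below is a hypothetical helper, not minikube's fix.go: it parses the same guest output and reproduces the 73.340787ms delta reported in the log; the 2s tolerance is an assumption made for the sketch.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns the guest's "date +%s.%N" output, e.g. "1737986767.722256723",
// into a time.Time.
func parseEpoch(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := parseEpoch("1737986767.722256723") // guest output from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2025, 1, 27, 14, 6, 7, 648915936, time.UTC) // "Remote" time from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
}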
	I0127 14:06:07.750393 1013451 start.go:83] releasing machines lock for "addons-097644", held for 23.899285781s
	I0127 14:06:07.750420 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.750687 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:07.753394 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.753884 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.753910 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.754016 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754573 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754725 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:07.754834 1013451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 14:06:07.754900 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.754942 1013451 ssh_runner.go:195] Run: cat /version.json
	I0127 14:06:07.754971 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:07.757717 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.757761 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758110 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.758137 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758171 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:07.758187 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:07.758397 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.758407 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:07.758616 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.758632 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:07.758733 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.758790 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:07.758889 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.758968 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:07.862665 1013451 ssh_runner.go:195] Run: systemctl --version
	I0127 14:06:07.869339 1013451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 14:06:08.030804 1013451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 14:06:08.038146 1013451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 14:06:08.038222 1013451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 14:06:08.055525 1013451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 14:06:08.055564 1013451 start.go:495] detecting cgroup driver to use...
	I0127 14:06:08.055650 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 14:06:08.072349 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 14:06:08.087838 1013451 docker.go:217] disabling cri-docker service (if available) ...
	I0127 14:06:08.087904 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 14:06:08.103124 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 14:06:08.119044 1013451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 14:06:08.243455 1013451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 14:06:08.410960 1013451 docker.go:233] disabling docker service ...
	I0127 14:06:08.411040 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 14:06:08.425578 1013451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 14:06:08.438593 1013451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 14:06:08.564242 1013451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 14:06:08.678221 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 14:06:08.692806 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 14:06:08.713320 1013451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 14:06:08.713400 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.724369 1013451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 14:06:08.724451 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.735585 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.746053 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.756606 1013451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 14:06:08.767332 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.777994 1013451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 14:06:08.795855 1013451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
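The net effect of the pause-image, cgroup-manager, and conmon sed commands above is a small set of line substitutions in /etc/crio/crio.conf.d/02-crio.conf. The sketch below reproduces those three edits in Go on an assumed starting file (the original values shown are invented for illustration); it is not how minikube applies them, just a compact view of the before/after.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting content of 02-crio.conf for the sketch.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image to registry.k8s.io/pause:3.10.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Keep conmon in the "pod" cgroup (the log does this via delete-then-append).
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)
	fmt.Print(conf)
}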
	I0127 14:06:08.806376 1013451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 14:06:08.815691 1013451 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 14:06:08.815764 1013451 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 14:06:08.828215 1013451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 14:06:08.837677 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:08.971639 1013451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 14:06:09.063916 1013451 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 14:06:09.064038 1013451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 14:06:09.069097 1013451 start.go:563] Will wait 60s for crictl version
	I0127 14:06:09.069188 1013451 ssh_runner.go:195] Run: which crictl
	I0127 14:06:09.073113 1013451 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 14:06:09.113259 1013451 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 14:06:09.113366 1013451 ssh_runner.go:195] Run: crio --version
	I0127 14:06:09.142504 1013451 ssh_runner.go:195] Run: crio --version
	I0127 14:06:09.173583 1013451 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 14:06:09.174862 1013451 main.go:141] libmachine: (addons-097644) Calling .GetIP
	I0127 14:06:09.177395 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:09.177812 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:09.177839 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:09.178071 1013451 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 14:06:09.182188 1013451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
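The bash one-liner above rewrites /etc/hosts so that host.minikube.internal always resolves to the gateway IP 192.168.39.1: drop any existing entry, append the new one. A rough Go equivalent, simplified relative to the logged command (no temp-file-then-copy step and no sudo boundary), looks like this:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Mirror the grep -v $'\thost.minikube.internal$' filter.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}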
	I0127 14:06:09.194695 1013451 kubeadm.go:883] updating cluster {Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 14:06:09.194860 1013451 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:06:09.194924 1013451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:06:09.227895 1013451 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 14:06:09.227979 1013451 ssh_runner.go:195] Run: which lz4
	I0127 14:06:09.232384 1013451 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 14:06:09.236534 1013451 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 14:06:09.236573 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 14:06:10.668374 1013451 crio.go:462] duration metric: took 1.436016004s to copy over tarball
	I0127 14:06:10.668456 1013451 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 14:06:12.991225 1013451 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.322734481s)
	I0127 14:06:12.991265 1013451 crio.go:469] duration metric: took 2.322855117s to extract the tarball
	I0127 14:06:12.991298 1013451 ssh_runner.go:146] rm: /preloaded.tar.lz4
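For reference, the preload figures above (398670900 bytes, copied in 1.436s and extracted in 2.323s) correspond to healthy throughput on this host; the snippet below just redoes that arithmetic from the logged numbers.

package main

import "fmt"

func main() {
	const bytes = 398670900.0       // size of preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	const copySecs = 1.436016004    // scp duration from the log
	const extractSecs = 2.322734481 // tar -I lz4 extraction duration from the log
	fmt.Printf("copy:    %.1f MB/s\n", bytes/1e6/copySecs)    // ~277.6 MB/s
	fmt.Printf("extract: %.1f MB/s\n", bytes/1e6/extractSecs) // ~171.6 MB/s
}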
	I0127 14:06:13.029341 1013451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 14:06:13.076231 1013451 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 14:06:13.076261 1013451 cache_images.go:84] Images are preloaded, skipping loading
	I0127 14:06:13.076271 1013451 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.32.1 crio true true} ...
	I0127 14:06:13.076414 1013451 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-097644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 14:06:13.076504 1013451 ssh_runner.go:195] Run: crio config
	I0127 14:06:13.126305 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:06:13.126332 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:06:13.126348 1013451 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 14:06:13.126373 1013451 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-097644 NodeName:addons-097644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 14:06:13.126544 1013451 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-097644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.228"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 14:06:13.126625 1013451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 14:06:13.136556 1013451 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 14:06:13.136615 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 14:06:13.146362 1013451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0127 14:06:13.163788 1013451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 14:06:13.180741 1013451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 14:06:13.198243 1013451 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I0127 14:06:13.202384 1013451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 14:06:13.214765 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:13.343136 1013451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:06:13.360886 1013451 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644 for IP: 192.168.39.228
	I0127 14:06:13.360930 1013451 certs.go:194] generating shared ca certs ...
	I0127 14:06:13.360952 1013451 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.361149 1013451 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 14:06:13.420822 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt ...
	I0127 14:06:13.420879 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt: {Name:mkc9e8d9cd31bad89b914a0e39146cbc4cb9a566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.421227 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key ...
	I0127 14:06:13.421256 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key: {Name:mk54337b6f7f11134a1a57c50e00b3a25a5764c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.421401 1013451 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 14:06:13.671791 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt ...
	I0127 14:06:13.671827 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt: {Name:mkdf635bff813871fb0a8f71a2bc8202826329c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.672076 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key ...
	I0127 14:06:13.672097 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key: {Name:mkb62b21eecb2941c4e1d8ed131c001defc5b97f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.672212 1013451 certs.go:256] generating profile certs ...
	I0127 14:06:13.672327 1013451 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key
	I0127 14:06:13.672363 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt with IP's: []
	I0127 14:06:13.991379 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt ...
	I0127 14:06:13.991415 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: {Name:mk7115664fd0816a20da8202516a46d36538c4b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.991616 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key ...
	I0127 14:06:13.991638 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.key: {Name:mkbc457d424e6b80c2d9c2572cbd34113ffac2c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:13.991748 1013451 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b
	I0127 14:06:13.991771 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228]
	I0127 14:06:14.087652 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b ...
	I0127 14:06:14.087693 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b: {Name:mk22529933d8ca851610043569adad4d85cdb151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.087885 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b ...
	I0127 14:06:14.087904 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b: {Name:mk9f9822d6229d3d1127240b0286c22fc9ac2b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.088018 1013451 certs.go:381] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt.dc53463b -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt
	I0127 14:06:14.088115 1013451 certs.go:385] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key.dc53463b -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key
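The 10.96.0.1 SAN in the apiserver certificate above is not arbitrary: Kubernetes reserves the first address of the service CIDR (10.96.0.0/12 in this cluster's config) for the kubernetes.default Service, so the apiserver cert must cover it alongside 127.0.0.1 and the node IP 192.168.39.228. A small, IPv4-only sketch of that derivation:

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the network address of the CIDR plus one, i.e. the
// ClusterIP Kubernetes assigns to the kubernetes.default Service.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4-only sketch: %s", cidr)
	}
	first := make(net.IP, len(ip))
	copy(first, ip)
	first[3]++ // network address + 1
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}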
	I0127 14:06:14.088186 1013451 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key
	I0127 14:06:14.088214 1013451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt with IP's: []
	I0127 14:06:14.315571 1013451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt ...
	I0127 14:06:14.315616 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt: {Name:mkf7f0dd114b37a403559f311ca206dc0dfaf354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.315850 1013451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key ...
	I0127 14:06:14.315872 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key: {Name:mk7c251de1f033a991791c5bacc6c6b2e96630a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:14.316112 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 14:06:14.316168 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 14:06:14.316208 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 14:06:14.316249 1013451 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 14:06:14.317102 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 14:06:14.347128 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 14:06:14.372136 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 14:06:14.397562 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 14:06:14.422996 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 14:06:14.448211 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 14:06:14.474009 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 14:06:14.501190 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 14:06:14.526766 1013451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 14:06:14.552500 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 14:06:14.570395 1013451 ssh_runner.go:195] Run: openssl version
	I0127 14:06:14.576450 1013451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 14:06:14.588501 1013451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.593391 1013451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.593460 1013451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 14:06:14.599581 1013451 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 14:06:14.612023 1013451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 14:06:14.616483 1013451 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 14:06:14.616554 1013451 kubeadm.go:392] StartCluster: {Name:addons-097644 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-097644 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:06:14.616661 1013451 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 14:06:14.616711 1013451 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 14:06:14.653932 1013451 cri.go:89] found id: ""
	I0127 14:06:14.654019 1013451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 14:06:14.665367 1013451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 14:06:14.675999 1013451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 14:06:14.686503 1013451 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 14:06:14.686529 1013451 kubeadm.go:157] found existing configuration files:
	
	I0127 14:06:14.686587 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 14:06:14.696362 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 14:06:14.696421 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 14:06:14.706997 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 14:06:14.717082 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 14:06:14.717154 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 14:06:14.727528 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 14:06:14.737554 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 14:06:14.737625 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 14:06:14.748328 1013451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 14:06:14.758305 1013451 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 14:06:14.758388 1013451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 14:06:14.768545 1013451 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 14:06:14.824105 1013451 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 14:06:14.824161 1013451 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 14:06:14.954367 1013451 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 14:06:14.954546 1013451 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 14:06:14.954688 1013451 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 14:06:14.966475 1013451 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 14:06:15.100500 1013451 out.go:235]   - Generating certificates and keys ...
	I0127 14:06:15.100639 1013451 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 14:06:15.100710 1013451 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 14:06:15.100827 1013451 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 14:06:15.512511 1013451 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 14:06:15.776387 1013451 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 14:06:16.241691 1013451 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 14:06:16.495803 1013451 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 14:06:16.496119 1013451 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-097644 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0127 14:06:16.692825 1013451 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 14:06:16.693029 1013451 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-097644 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0127 14:06:16.951084 1013451 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 14:06:17.150130 1013451 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 14:06:17.461000 1013451 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 14:06:17.461403 1013451 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 14:06:17.774344 1013451 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 14:06:18.080863 1013451 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 14:06:18.696649 1013451 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 14:06:18.826173 1013451 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 14:06:18.926775 1013451 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 14:06:18.928106 1013451 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 14:06:18.932397 1013451 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 14:06:18.934351 1013451 out.go:235]   - Booting up control plane ...
	I0127 14:06:18.934472 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 14:06:18.934569 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 14:06:18.934649 1013451 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 14:06:18.950262 1013451 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 14:06:18.956527 1013451 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 14:06:18.956606 1013451 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 14:06:19.083734 1013451 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 14:06:19.083865 1013451 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 14:06:20.084411 1013451 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001422431s
	I0127 14:06:20.084523 1013451 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 14:06:25.084312 1013451 kubeadm.go:310] [api-check] The API server is healthy after 5.002685853s
	I0127 14:06:25.096890 1013451 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 14:06:25.113838 1013451 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 14:06:25.145234 1013451 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 14:06:25.145454 1013451 kubeadm.go:310] [mark-control-plane] Marking the node addons-097644 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 14:06:25.158810 1013451 kubeadm.go:310] [bootstrap-token] Using token: eelxhi.iqqoealhyjynagyr
	I0127 14:06:25.160144 1013451 out.go:235]   - Configuring RBAC rules ...
	I0127 14:06:25.160292 1013451 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 14:06:25.166578 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 14:06:25.179189 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 14:06:25.182767 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 14:06:25.186739 1013451 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 14:06:25.193800 1013451 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 14:06:25.491524 1013451 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 14:06:25.946419 1013451 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 14:06:26.491307 1013451 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 14:06:26.491353 1013451 kubeadm.go:310] 
	I0127 14:06:26.491436 1013451 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 14:06:26.491446 1013451 kubeadm.go:310] 
	I0127 14:06:26.491581 1013451 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 14:06:26.491591 1013451 kubeadm.go:310] 
	I0127 14:06:26.491622 1013451 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 14:06:26.491706 1013451 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 14:06:26.491763 1013451 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 14:06:26.491771 1013451 kubeadm.go:310] 
	I0127 14:06:26.491815 1013451 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 14:06:26.491823 1013451 kubeadm.go:310] 
	I0127 14:06:26.491902 1013451 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 14:06:26.491927 1013451 kubeadm.go:310] 
	I0127 14:06:26.491976 1013451 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 14:06:26.492050 1013451 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 14:06:26.492110 1013451 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 14:06:26.492120 1013451 kubeadm.go:310] 
	I0127 14:06:26.492192 1013451 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 14:06:26.492266 1013451 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 14:06:26.492279 1013451 kubeadm.go:310] 
	I0127 14:06:26.492347 1013451 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eelxhi.iqqoealhyjynagyr \
	I0127 14:06:26.492435 1013451 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 14:06:26.492455 1013451 kubeadm.go:310] 	--control-plane 
	I0127 14:06:26.492462 1013451 kubeadm.go:310] 
	I0127 14:06:26.492535 1013451 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 14:06:26.492542 1013451 kubeadm.go:310] 
	I0127 14:06:26.492655 1013451 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eelxhi.iqqoealhyjynagyr \
	I0127 14:06:26.492807 1013451 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 14:06:26.493374 1013451 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 14:06:26.493713 1013451 cni.go:84] Creating CNI manager for ""
	I0127 14:06:26.493730 1013451 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:06:26.495461 1013451 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 14:06:26.496737 1013451 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 14:06:26.508895 1013451 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
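The 496-byte conflist copied above is not reproduced in the log. A typical bridge CNI conflist for this kind of setup looks roughly like the sketch below (hedged: the exact fields minikube writes may differ), with the host-local subnet matching the pod CIDR 10.244.0.0/16 chosen earlier.

package main

import (
	"encoding/json"
	"fmt"
)

// conflist is an illustrative bridge + portmap chain, not the literal file
// that was scp'd to /etc/cni/net.d/1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Sanity-check that the sketch is well-formed JSON before printing it.
	if !json.Valid([]byte(conflist)) {
		panic("invalid conflist")
	}
	fmt.Println(conflist)
}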
	I0127 14:06:26.531487 1013451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 14:06:26.531595 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-097644 minikube.k8s.io/updated_at=2025_01_27T14_06_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=addons-097644 minikube.k8s.io/primary=true
	I0127 14:06:26.531600 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:26.660204 1013451 ops.go:34] apiserver oom_adj: -16
	I0127 14:06:26.660344 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:27.161225 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:27.660827 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:28.161152 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:28.661068 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:29.160473 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:29.661076 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.161022 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.660596 1013451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 14:06:30.801320 1013451 kubeadm.go:1113] duration metric: took 4.269789638s to wait for elevateKubeSystemPrivileges
	I0127 14:06:30.801428 1013451 kubeadm.go:394] duration metric: took 16.184866129s to StartCluster
	I0127 14:06:30.801479 1013451 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:30.801625 1013451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:06:30.802052 1013451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:06:30.802521 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 14:06:30.802558 1013451 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 14:06:30.802614 1013451 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0127 14:06:30.802733 1013451 addons.go:69] Setting yakd=true in profile "addons-097644"
	I0127 14:06:30.802749 1013451 addons.go:69] Setting inspektor-gadget=true in profile "addons-097644"
	I0127 14:06:30.802771 1013451 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-097644"
	I0127 14:06:30.802767 1013451 addons.go:69] Setting default-storageclass=true in profile "addons-097644"
	I0127 14:06:30.802782 1013451 addons.go:238] Setting addon inspektor-gadget=true in "addons-097644"
	I0127 14:06:30.802787 1013451 addons.go:69] Setting registry=true in profile "addons-097644"
	I0127 14:06:30.802789 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:30.802795 1013451 addons.go:69] Setting ingress=true in profile "addons-097644"
	I0127 14:06:30.802809 1013451 addons.go:69] Setting volcano=true in profile "addons-097644"
	I0127 14:06:30.802819 1013451 addons.go:238] Setting addon ingress=true in "addons-097644"
	I0127 14:06:30.802820 1013451 addons.go:238] Setting addon volcano=true in "addons-097644"
	I0127 14:06:30.802827 1013451 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-097644"
	I0127 14:06:30.802840 1013451 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-097644"
	I0127 14:06:30.802851 1013451 addons.go:69] Setting cloud-spanner=true in profile "addons-097644"
	I0127 14:06:30.802867 1013451 addons.go:238] Setting addon cloud-spanner=true in "addons-097644"
	I0127 14:06:30.802875 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802797 1013451 addons.go:238] Setting addon registry=true in "addons-097644"
	I0127 14:06:30.802879 1013451 addons.go:69] Setting volumesnapshots=true in profile "addons-097644"
	I0127 14:06:30.802883 1013451 addons.go:69] Setting gcp-auth=true in profile "addons-097644"
	I0127 14:06:30.802895 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802901 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802905 1013451 addons.go:238] Setting addon volumesnapshots=true in "addons-097644"
	I0127 14:06:30.802916 1013451 mustload.go:65] Loading cluster: addons-097644
	I0127 14:06:30.802923 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803032 1013451 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-097644"
	I0127 14:06:30.803073 1013451 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-097644"
	I0127 14:06:30.803102 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803126 1013451 config.go:182] Loaded profile config "addons-097644": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:06:30.802805 1013451 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-097644"
	I0127 14:06:30.803177 1013451 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-097644"
	I0127 14:06:30.803393 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803444 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803447 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.802869 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.803474 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803497 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803523 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803613 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803651 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803721 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803736 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803760 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803765 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802783 1013451 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-097644"
	I0127 14:06:30.803814 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.803871 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802818 1013451 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-097644"
	I0127 14:06:30.802800 1013451 addons.go:69] Setting storage-provisioner=true in profile "addons-097644"
	I0127 14:06:30.804156 1013451 addons.go:238] Setting addon storage-provisioner=true in "addons-097644"
	I0127 14:06:30.804206 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804439 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.804477 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802876 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804686 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.804708 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.803834 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.802767 1013451 addons.go:69] Setting metrics-server=true in profile "addons-097644"
	I0127 14:06:30.804942 1013451 addons.go:238] Setting addon metrics-server=true in "addons-097644"
	I0127 14:06:30.804972 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.805340 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.805359 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.805372 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.805400 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.813160 1013451 out.go:177] * Verifying Kubernetes components...
	I0127 14:06:30.802876 1013451 addons.go:69] Setting ingress-dns=true in profile "addons-097644"
	I0127 14:06:30.813527 1013451 addons.go:238] Setting addon ingress-dns=true in "addons-097644"
	I0127 14:06:30.813587 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.814019 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.814072 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.802762 1013451 addons.go:238] Setting addon yakd=true in "addons-097644"
	I0127 14:06:30.814349 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.814935 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.814996 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.815147 1013451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 14:06:30.802869 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.804130 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.815331 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.824258 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0127 14:06:30.825578 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0127 14:06:30.829296 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0127 14:06:30.829387 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0127 14:06:30.829572 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.829610 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.829612 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.829656 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.831082 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831098 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831220 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831225 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.831765 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.831788 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.831892 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.831912 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832037 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.832062 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832195 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.832345 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.832357 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.832802 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.832840 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.833353 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833374 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833419 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.833641 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.834032 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.834058 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.834072 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.834105 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.838453 1013451 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-097644"
	I0127 14:06:30.838522 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.838935 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.838995 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.840603 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0127 14:06:30.843186 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
	I0127 14:06:30.843795 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.844312 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.844326 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.844777 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.844960 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.849282 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.849730 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.849777 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.863460 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0127 14:06:30.864087 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.864757 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.864784 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.865181 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.865783 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.865833 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.873911 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.874553 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0127 14:06:30.874638 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.874658 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.875026 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.875592 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.875633 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.876937 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
	I0127 14:06:30.877116 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I0127 14:06:30.877252 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.878004 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.878029 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.878487 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.879164 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.879208 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.879477 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41067
	I0127 14:06:30.879682 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0127 14:06:30.880336 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.880358 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0127 14:06:30.880765 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.881119 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.881138 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.881232 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0127 14:06:30.881435 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.881449 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.881871 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.881945 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.881977 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0127 14:06:30.882565 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.882610 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.882853 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.883356 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.883373 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.883436 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.883527 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.883562 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.883847 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.883908 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.884462 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.884501 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.884735 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.884897 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.884907 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885047 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.885329 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.885475 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.885487 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885686 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.885815 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.885828 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.885886 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.886415 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.886456 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.886895 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.886966 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.886997 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.887517 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.887560 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.887602 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.887813 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.890600 1013451 addons.go:238] Setting addon default-storageclass=true in "addons-097644"
	I0127 14:06:30.890648 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:30.890997 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.891046 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.891842 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.894240 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 14:06:30.894842 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0127 14:06:30.895286 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.895416 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0127 14:06:30.895847 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.895866 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.896029 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.896491 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.896510 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.896934 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.897068 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:30.897222 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.898593 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0127 14:06:30.899242 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I0127 14:06:30.899629 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:30.899790 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.899976 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.900109 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.900506 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.900557 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.900630 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.900646 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.900769 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.900778 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.901107 1013451 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 14:06:30.901132 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 14:06:30.901138 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.901155 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.901326 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.903634 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.904294 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.906030 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.906143 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.906825 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.906847 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.907181 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.907365 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.907455 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.907556 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.907888 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.908168 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.910334 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 14:06:30.910342 1013451 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 14:06:30.912373 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 14:06:30.912395 1013451 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 14:06:30.912423 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.912492 1013451 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 14:06:30.912507 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 14:06:30.912528 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.916227 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.916724 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.916749 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.916943 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.917159 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.917417 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.917631 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.917987 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.918511 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.918550 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.918760 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.918938 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.919079 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.919222 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.923687 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I0127 14:06:30.924139 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.924940 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.924966 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.925060 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I0127 14:06:30.925654 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.926360 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.926379 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.926947 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.927207 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.928312 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.929602 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.930009 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:30.930023 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:30.932400 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:30.932438 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:30.932446 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:30.932454 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:30.932461 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:30.932905 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:30.932938 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:30.932946 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 14:06:30.933068 1013451 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0127 14:06:30.933415 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.935674 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0127 14:06:30.935720 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0127 14:06:30.935830 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I0127 14:06:30.936334 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.936432 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.936950 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.936971 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.937146 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.937165 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.937592 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.937657 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0127 14:06:30.937811 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.938038 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.938478 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.938564 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.938581 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.938719 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.938993 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.939067 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43135
	I0127 14:06:30.939447 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.940030 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.940054 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.940132 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.940643 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.940690 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.941538 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.941561 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.941618 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.941662 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.942168 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.942229 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.942674 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
	I0127 14:06:30.942829 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:30.942877 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:30.943179 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.943303 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.943656 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.943677 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.944080 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 14:06:30.944110 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.944168 1013451 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 14:06:30.944396 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.944907 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40807
	I0127 14:06:30.945729 1013451 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 14:06:30.945746 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 14:06:30.945767 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.947021 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 14:06:30.947720 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I0127 14:06:30.947740 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.947803 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0127 14:06:30.948506 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.948668 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.948768 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.949312 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.949184 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949424 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949777 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949798 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949814 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.949830 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.949831 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 14:06:30.950788 1013451 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 14:06:30.949879 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 14:06:30.950166 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.950190 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.951652 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.951908 1013451 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:06:30.951930 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 14:06:30.951955 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.952269 1013451 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 14:06:30.952290 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 14:06:30.952314 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.952564 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.952635 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0127 14:06:30.952847 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.953218 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.953829 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.953849 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.953949 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.954442 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.954245 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 14:06:30.954648 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.957753 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 14:06:30.957955 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958028 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.958064 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958865 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.958661 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.958740 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.959195 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.959217 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959357 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.959389 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959494 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.959717 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.959903 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.960115 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.960228 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.960239 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.960472 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.960534 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.960555 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.960505 1013451 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 14:06:30.960521 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 14:06:30.960696 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.960722 1013451 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 14:06:30.960806 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.961484 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 14:06:30.962231 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.962333 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.962472 1013451 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 14:06:30.962490 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 14:06:30.962854 1013451 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 14:06:30.962875 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.962916 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I0127 14:06:30.962788 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.963147 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.963248 1013451 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 14:06:30.963288 1013451 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 14:06:30.963312 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.963411 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.963647 1013451 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 14:06:30.963669 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 14:06:30.963686 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.964105 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 14:06:30.964126 1013451 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 14:06:30.964145 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.964611 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.964641 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.965199 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.965450 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.965974 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 14:06:30.967214 1013451 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 14:06:30.967970 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.968624 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 14:06:30.968647 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 14:06:30.968669 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.968879 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969411 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969574 1013451 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 14:06:30.969589 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.969904 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42225
	I0127 14:06:30.969929 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.969945 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.970191 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.970321 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.970337 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.970367 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.970441 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.970532 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.970725 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.971134 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.971167 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.971138 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.971183 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971292 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.971326 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.971354 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.971404 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.971423 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971578 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.971627 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.971673 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.971859 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.971884 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.971921 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.971936 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.971961 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.972328 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.972505 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 14:06:30.972529 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.972896 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.973056 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.973650 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.973898 1013451 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 14:06:30.973918 1013451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 14:06:30.973937 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.974033 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.974299 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 14:06:30.974313 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 14:06:30.974330 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.974535 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.974560 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.974828 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.975014 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.975139 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.975250 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	W0127 14:06:30.976492 1013451 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45740->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.976527 1013451 retry.go:31] will retry after 249.98777ms: ssh: handshake failed: read tcp 192.168.39.1:45740->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.977856 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.977979 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978359 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.978399 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978592 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.978603 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.978618 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.978798 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.978858 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.978981 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.979003 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.979124 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:30.979153 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:30.979292 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	W0127 14:06:30.980391 1013451 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45758->192.168.39.228:22: read: connection reset by peer
	I0127 14:06:30.980418 1013451 retry.go:31] will retry after 282.19412ms: ssh: handshake failed: read tcp 192.168.39.1:45758->192.168.39.228:22: read: connection reset by peer
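The two SSH handshake failures above are absorbed by minikube's generic retry helper (the retry.go:31 "will retry after ..." lines). Below is a minimal, self-contained Go sketch of the same retry-with-backoff pattern; it is illustrative only and not minikube's actual implementation, and the delays and error text are borrowed from this run purely as example values.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry calls fn up to attempts times, sleeping an increasing delay between
    // failures, mirroring the "will retry after 249.98777ms" behaviour in the log.
    func retry(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // simple exponential backoff
        }
        return fmt.Errorf("after %d attempts, last error: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retry(5, 250*time.Millisecond, func() error {
            calls++
            if calls < 3 { // first two dials fail, as in the log above
                return errors.New("ssh: handshake failed: read: connection reset by peer")
            }
            return nil
        })
        fmt.Println("result:", err)
    }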
	I0127 14:06:30.986758 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0127 14:06:30.987211 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:30.987797 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:30.987824 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:30.988141 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:30.988375 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:30.990245 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:30.992302 1013451 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 14:06:30.993765 1013451 out.go:177]   - Using image docker.io/busybox:stable
	I0127 14:06:30.995107 1013451 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 14:06:30.995123 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 14:06:30.995143 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:30.998641 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.999124 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:30.999163 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:30.999454 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:30.999690 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:30.999838 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:31.000028 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:31.232253 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 14:06:31.331831 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 14:06:31.347794 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 14:06:31.426357 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 14:06:31.491578 1013451 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 14:06:31.491606 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 14:06:31.512213 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 14:06:31.512250 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 14:06:31.515355 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 14:06:31.515377 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 14:06:31.516574 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 14:06:31.525098 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 14:06:31.533157 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 14:06:31.559468 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 14:06:31.559521 1013451 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 14:06:31.575968 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 14:06:31.648773 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 14:06:31.648804 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 14:06:31.655677 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 14:06:31.655706 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 14:06:31.683163 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 14:06:31.683200 1013451 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 14:06:31.694871 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 14:06:31.704356 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 14:06:31.704382 1013451 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 14:06:31.744904 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 14:06:31.744940 1013451 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 14:06:31.903974 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 14:06:31.904017 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 14:06:31.964569 1013451 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 14:06:31.964605 1013451 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 14:06:31.969199 1013451 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:06:31.969220 1013451 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 14:06:32.044200 1013451 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 14:06:32.044228 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 14:06:32.127179 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 14:06:32.127220 1013451 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 14:06:32.135626 1013451 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.333055604s)
	I0127 14:06:32.135659 1013451 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.320384321s)
	I0127 14:06:32.135752 1013451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 14:06:32.135838 1013451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
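The sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only network gateway (192.168.39.1 in this run) and adds a log directive. The Go sketch below performs the same edit on a typical default Corefile; the Corefile contents shown are an assumption for illustration, not taken from this cluster.

    package main

    import (
        "fmt"
        "strings"
    )

    // A plausible default CoreDNS Corefile fragment (assumed for the example).
    const corefile = `.:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
               pods insecure
               fallthrough in-addr.arpa ip6.arpa
            }
            forward . /etc/resolv.conf
            cache 30
    }`

    // injectHostRecord inserts a hosts{} stanza resolving host.minikube.internal
    // to hostIP ahead of the forward plugin, and a log directive ahead of errors,
    // which is exactly what the sed expressions in the log do.
    func injectHostRecord(conf, hostIP string) string {
        hostsBlock := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }"
        var out []string
        for _, line := range strings.Split(conf, "\n") {
            trimmed := strings.TrimSpace(line)
            switch {
            case strings.HasPrefix(trimmed, "forward . /etc/resolv.conf"):
                out = append(out, hostsBlock, line)
            case trimmed == "errors":
                out = append(out, "        log", line)
            default:
                out = append(out, line)
            }
        }
        return strings.Join(out, "\n")
    }

    func main() {
        fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
    }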
	I0127 14:06:32.149940 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 14:06:32.149986 1013451 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 14:06:32.315159 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 14:06:32.343031 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 14:06:32.343069 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 14:06:32.360427 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 14:06:32.363253 1013451 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:32.363282 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 14:06:32.374156 1013451 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 14:06:32.374180 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 14:06:32.467818 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 14:06:32.467851 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 14:06:32.668364 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:32.710295 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 14:06:32.747185 1013451 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 14:06:32.747216 1013451 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 14:06:33.065468 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 14:06:33.065504 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 14:06:33.337642 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 14:06:33.337736 1013451 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 14:06:33.876528 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 14:06:33.876560 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 14:06:34.139997 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.907702721s)
	I0127 14:06:34.140087 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:34.140107 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:34.140458 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:34.140487 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:34.140506 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:34.140527 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:34.140800 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:34.140818 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:34.200127 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 14:06:34.200161 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 14:06:34.562411 1013451 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 14:06:34.562443 1013451 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 14:06:34.714298 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 14:06:36.621630 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.289747033s)
	I0127 14:06:36.621713 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.621733 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.621631 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.273802077s)
	I0127 14:06:36.621792 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.621810 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622093 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622103 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622131 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622142 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.622152 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622153 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622192 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622208 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622223 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.622252 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.622394 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622422 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.622480 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:36.622510 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.622521 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.760227 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:36.760259 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:36.760715 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:36.760775 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:36.760796 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:37.753882 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 14:06:37.753936 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:37.757253 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:37.757684 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:37.757716 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:37.757878 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:37.758108 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:37.758286 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:37.758457 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
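Each sshutil.go:53 "new ssh client" line corresponds to a key-based SSH connection to the node (192.168.39.228:22, user docker, the machine's id_rsa). A rough client-side sketch using golang.org/x/crypto/ssh is shown below; it is illustrative only, the key path is the machine-specific one from this run, and the command being run is an assumption for the example.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.228:22", cfg)
        if err != nil {
            log.Fatal(err) // e.g. "handshake failed: connection reset by peer"
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("uname -a") // assumed example command
        fmt.Println(string(out), err)
    }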
	I0127 14:06:38.134471 1013451 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 14:06:38.320566 1013451 addons.go:238] Setting addon gcp-auth=true in "addons-097644"
	I0127 14:06:38.320644 1013451 host.go:66] Checking if "addons-097644" exists ...
	I0127 14:06:38.321069 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:38.321130 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:38.336729 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 14:06:38.337259 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:38.337802 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:38.337830 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:38.338264 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:38.338744 1013451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:06:38.338792 1013451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:06:38.354738 1013451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0127 14:06:38.355352 1013451 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:06:38.355944 1013451 main.go:141] libmachine: Using API Version  1
	I0127 14:06:38.355968 1013451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:06:38.356332 1013451 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:06:38.356545 1013451 main.go:141] libmachine: (addons-097644) Calling .GetState
	I0127 14:06:38.358363 1013451 main.go:141] libmachine: (addons-097644) Calling .DriverName
	I0127 14:06:38.358617 1013451 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 14:06:38.358647 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHHostname
	I0127 14:06:38.361268 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:38.361655 1013451 main.go:141] libmachine: (addons-097644) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:d4:27", ip: ""} in network mk-addons-097644: {Iface:virbr1 ExpiryTime:2025-01-27 15:05:59 +0000 UTC Type:0 Mac:52:54:00:9d:d4:27 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:addons-097644 Clientid:01:52:54:00:9d:d4:27}
	I0127 14:06:38.361682 1013451 main.go:141] libmachine: (addons-097644) DBG | domain addons-097644 has defined IP address 192.168.39.228 and MAC address 52:54:00:9d:d4:27 in network mk-addons-097644
	I0127 14:06:38.361861 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHPort
	I0127 14:06:38.362040 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHKeyPath
	I0127 14:06:38.362196 1013451 main.go:141] libmachine: (addons-097644) Calling .GetSSHUsername
	I0127 14:06:38.362330 1013451 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/addons-097644/id_rsa Username:docker}
	I0127 14:06:39.535502 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.109096844s)
	I0127 14:06:39.535546 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.010420009s)
	I0127 14:06:39.535517 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.018902491s)
	I0127 14:06:39.535592 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535581 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535619 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535628 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535636 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.002450449s)
	I0127 14:06:39.535631 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535671 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535683 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.95968766s)
	I0127 14:06:39.535709 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535724 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535686 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535756 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.840858115s)
	I0127 14:06:39.535612 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535782 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.535791 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.535840 1013451 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.400060603s)
	I0127 14:06:39.535876 1013451 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.40001441s)
	I0127 14:06:39.535893 1013451 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0127 14:06:39.535966 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.220769901s)
	I0127 14:06:39.536002 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536013 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536138 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.175676841s)
	I0127 14:06:39.536161 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536171 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536302 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.867903313s)
	W0127 14:06:39.536330 1013451 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 14:06:39.536367 1013451 retry.go:31] will retry after 296.657665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
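The "no matches for kind VolumeSnapshotClass" failure above happens because the snapshot CRDs and a VolumeSnapshotClass object are applied in a single kubectl invocation, so the new kind is not yet registered when the class is validated; minikube simply retries, and the --force re-apply at 14:06:39.833 below completes at 14:06:42.508 without a further retry. An alternative that avoids the race is sketched below: apply the CRDs first, wait for them to become Established, then apply the dependent objects. The file paths are the ones from this run; splitting the work into three kubectl calls is an assumption for the example, not what minikube does.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // run shells out to kubectl and aborts on the first error.
    func run(args ...string) {
        cmd := exec.Command("kubectl", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("kubectl %v: %v", args, err)
        }
    }

    func main() {
        // 1. CRDs only.
        run("apply",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml")

        // 2. Block until the API server has registered the new kinds.
        run("wait", "--for=condition=Established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io",
            "crd/volumesnapshotcontents.snapshot.storage.k8s.io",
            "crd/volumesnapshots.snapshot.storage.k8s.io")

        // 3. Everything that references those kinds.
        run("apply",
            "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
            "-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
            "-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml")
    }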
	I0127 14:06:39.536420 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.826074832s)
	I0127 14:06:39.536451 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.536464 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.536976 1013451 node_ready.go:35] waiting up to 6m0s for node "addons-097644" to be "Ready" ...
	I0127 14:06:39.538246 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538268 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538278 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538286 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538255 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538334 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538358 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538372 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538384 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538395 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538416 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538437 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538457 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538472 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538495 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538521 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538546 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538560 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538568 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538581 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538594 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538529 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538632 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538641 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538644 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538649 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538655 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538658 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538544 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538662 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538437 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538666 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538707 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538732 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538738 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538747 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.538754 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538954 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.538987 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.538994 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.538457 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539033 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539043 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.539051 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.538631 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539103 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.539111 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.539291 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.539323 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539331 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.539465 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.539494 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.539501 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540397 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540437 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540445 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540507 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540538 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540545 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540555 1013451 addons.go:479] Verifying addon metrics-server=true in "addons-097644"
	I0127 14:06:39.540638 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540659 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540664 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540670 1013451 addons.go:479] Verifying addon ingress=true in "addons-097644"
	I0127 14:06:39.540826 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.540849 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.540856 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.540865 1013451 addons.go:479] Verifying addon registry=true in "addons-097644"
	I0127 14:06:39.541201 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.541235 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.541251 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.541333 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.541374 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.541381 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.543517 1013451 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-097644 service yakd-dashboard -n yakd-dashboard
	
	I0127 14:06:39.543527 1013451 out.go:177] * Verifying ingress addon...
	I0127 14:06:39.543529 1013451 out.go:177] * Verifying registry addon...
	I0127 14:06:39.545868 1013451 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 14:06:39.546062 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 14:06:39.551413 1013451 node_ready.go:49] node "addons-097644" has status "Ready":"True"
	I0127 14:06:39.551444 1013451 node_ready.go:38] duration metric: took 14.446121ms for node "addons-097644" to be "Ready" ...
	I0127 14:06:39.551456 1013451 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:06:39.591856 1013451 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 14:06:39.591887 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:39.591997 1013451 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 14:06:39.592022 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
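The kapi.go:75/96 lines above, and the repeated "waiting for pod ... current state: Pending" lines that follow, are a poll loop that lists the pods behind each addon's label selector until they all report Ready. Below is a client-go sketch of the same wait loop, illustrative only and not minikube's code; the namespace and selector are taken from this run, while the kubeconfig location and timeout are assumptions.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allPodsReady returns true once every pod matching selector in ns has the
    // Ready condition set to True, and false (without error) while any is not.
    func allPodsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            if ctx.Err() != nil {
                log.Fatalf("timed out: %v", ctx.Err())
            }
            ok, err := allPodsReady(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
            if err != nil {
                log.Fatal(err)
            }
            if ok {
                fmt.Println("all ingress-nginx pods are Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }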
	I0127 14:06:39.604544 1013451 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:39.620217 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:39.620245 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:39.620663 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:39.620712 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:39.620733 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:39.833775 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 14:06:40.042238 1013451 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-097644" context rescaled to 1 replicas
	I0127 14:06:40.056864 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:40.057325 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:40.574204 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:40.574352 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:40.691503 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.977142989s)
	I0127 14:06:40.691571 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:40.691567 1013451 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.332922668s)
	I0127 14:06:40.691586 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:40.692022 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:40.692044 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:40.692055 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:40.692080 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:40.692356 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:40.692379 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:40.692393 1013451 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-097644"
	I0127 14:06:40.693820 1013451 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 14:06:40.693819 1013451 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 14:06:40.695829 1013451 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 14:06:40.696785 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 14:06:40.697165 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 14:06:40.697193 1013451 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 14:06:40.719430 1013451 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 14:06:40.719457 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:40.802113 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 14:06:40.802145 1013451 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 14:06:40.994953 1013451 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 14:06:40.995010 1013451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 14:06:41.051371 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:41.055369 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:41.085073 1013451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 14:06:41.212968 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:41.550636 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:41.551229 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:41.619011 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:41.704620 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:42.054408 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:42.054655 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:42.202621 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:42.508558 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.674717249s)
	I0127 14:06:42.508636 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:42.508654 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:42.508962 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:42.508984 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:42.508994 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:42.509010 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:42.509270 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:42.509297 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:42.509297 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:42.550865 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:42.552139 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:42.700968 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:43.051426 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:43.051775 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:43.219737 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:43.654172 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:43.659020 1013451 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.573886282s)
	I0127 14:06:43.659089 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:43.659111 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:43.659423 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:43.659520 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:43.659535 1013451 main.go:141] libmachine: Making call to close driver server
	I0127 14:06:43.659544 1013451 main.go:141] libmachine: (addons-097644) Calling .Close
	I0127 14:06:43.659496 1013451 main.go:141] libmachine: (addons-097644) DBG | Closing plugin on server side
	I0127 14:06:43.659831 1013451 main.go:141] libmachine: Successfully made call to close driver server
	I0127 14:06:43.659850 1013451 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 14:06:43.661096 1013451 addons.go:479] Verifying addon gcp-auth=true in "addons-097644"
	I0127 14:06:43.662980 1013451 out.go:177] * Verifying gcp-auth addon...
	I0127 14:06:43.665443 1013451 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 14:06:43.667959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:43.686297 1013451 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 14:06:43.686332 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:43.698333 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:43.752116 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:44.051507 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:44.051642 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:44.169983 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:44.202197 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:44.550596 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:44.551695 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:44.669572 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:44.701465 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:45.051101 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:45.051498 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:45.168566 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:45.201519 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:45.551156 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:45.552669 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:45.675646 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:45.702063 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:46.052220 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:46.052234 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:46.112080 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:46.168904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:46.201719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:46.551973 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:46.552112 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:46.668877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:46.701725 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:47.050599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:47.050979 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:47.169889 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:47.203312 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:47.550817 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:47.551169 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:47.668803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:47.701344 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:48.053223 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:48.053534 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:48.120721 1013451 pod_ready.go:103] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:48.172399 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:48.201255 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:48.552152 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:48.562421 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:48.670118 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:48.706743 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:49.056813 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:49.057202 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:49.175007 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:49.207070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:49.552745 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:49.552809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:49.670875 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:49.702320 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.051877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:50.052248 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:50.168779 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:50.202479 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.551892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:50.552457 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:50.615652 1013451 pod_ready.go:93] pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.615678 1013451 pod_ready.go:82] duration metric: took 11.011100516s for pod "amd-gpu-device-plugin-89xv2" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.615689 1013451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.627270 1013451 pod_ready.go:93] pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.627306 1013451 pod_ready.go:82] duration metric: took 11.610993ms for pod "coredns-668d6bf9bc-f5h88" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.627316 1013451 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.632345 1013451 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xk7kv" not found
	I0127 14:06:50.632372 1013451 pod_ready.go:82] duration metric: took 5.049964ms for pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace to be "Ready" ...
	E0127 14:06:50.632383 1013451 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-xk7kv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xk7kv" not found
	I0127 14:06:50.632390 1013451 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.637099 1013451 pod_ready.go:93] pod "etcd-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.637119 1013451 pod_ready.go:82] duration metric: took 4.724126ms for pod "etcd-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.637128 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.641577 1013451 pod_ready.go:93] pod "kube-apiserver-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.641597 1013451 pod_ready.go:82] duration metric: took 4.462666ms for pod "kube-apiserver-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.641605 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.669462 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:50.706029 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:50.809340 1013451 pod_ready.go:93] pod "kube-controller-manager-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:50.809365 1013451 pod_ready.go:82] duration metric: took 167.752957ms for pod "kube-controller-manager-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:50.809377 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f4zwd" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.050450 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:51.051944 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:51.170085 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:51.202947 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:51.208582 1013451 pod_ready.go:93] pod "kube-proxy-f4zwd" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:51.208606 1013451 pod_ready.go:82] duration metric: took 399.222781ms for pod "kube-proxy-f4zwd" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.208616 1013451 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.551263 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:51.551705 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:51.608807 1013451 pod_ready.go:93] pod "kube-scheduler-addons-097644" in "kube-system" namespace has status "Ready":"True"
	I0127 14:06:51.608840 1013451 pod_ready.go:82] duration metric: took 400.21695ms for pod "kube-scheduler-addons-097644" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.608854 1013451 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace to be "Ready" ...
	I0127 14:06:51.670471 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:51.701367 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:52.050707 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:52.050834 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:52.169284 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:52.200658 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:52.550340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:52.551185 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:52.668895 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:52.702017 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:53.057413 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:53.057641 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:53.169648 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:53.202006 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:53.550241 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:53.550722 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:53.620587 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:53.669530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:53.701719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:54.052792 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:54.053279 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:54.169476 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:54.201306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:54.551907 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:54.552638 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:54.669077 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:54.701764 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:55.100240 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:55.100296 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:55.182070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:55.201395 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:55.551761 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:55.551927 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:55.668933 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:55.701923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:56.050536 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:56.050982 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:56.119811 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:56.168904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:56.202072 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:56.551874 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:56.552481 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:56.669587 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:56.701617 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:57.050231 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:57.050613 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:57.170169 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:57.201972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:57.551609 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:57.551795 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:57.670084 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:57.702058 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:58.383183 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:58.383399 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:58.384179 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:58.384242 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:58.387592 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:06:58.550466 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:58.550887 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:58.668764 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:58.701776 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:59.050306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:59.050697 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:59.169436 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:59.204311 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:06:59.560946 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:06:59.560967 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:06:59.670919 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:06:59.702414 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:00.468343 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:00.468634 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:00.469971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:00.470230 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:00.475121 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:00.551178 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:00.552210 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:00.670053 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:00.702754 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:01.051143 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:01.051753 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:01.169521 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:01.202017 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:01.550952 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:01.551011 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:01.669355 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:01.701492 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:02.054133 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:02.054531 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:02.169554 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:02.201828 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:02.553190 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:02.553417 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:02.616135 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:02.669251 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:02.702653 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:03.051556 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:03.052058 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:03.168688 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:03.206615 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:03.552205 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:03.552324 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:03.670459 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:03.705277 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:04.050893 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:04.051564 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:04.169123 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:04.271611 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:04.550873 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:04.551002 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:04.618165 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:04.669774 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:04.701982 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:05.050574 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:05.050984 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:05.168730 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:05.201868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:05.550374 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:05.550418 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:05.668407 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:05.701325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:06.050944 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:06.051773 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:06.169027 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:06.201826 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:06.550446 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:06.551065 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.011171 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.012800 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:07.014528 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:07.051263 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.052394 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:07.168896 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.202772 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:07.552036 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:07.552265 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:07.669494 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:07.701789 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:08.050016 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:08.050930 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:08.169153 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:08.201129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:08.552701 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:08.554461 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:08.669806 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:08.702780 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:09.051527 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:09.051791 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:09.115325 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:09.169334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:09.201659 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:09.550572 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:09.550938 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:09.668878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:09.701776 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:10.051782 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:10.052645 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:10.168877 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:10.201786 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:10.551300 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:10.551673 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:10.669403 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:10.700959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:11.051149 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:11.051672 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:11.115643 1013451 pod_ready.go:103] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"False"
	I0127 14:07:11.169733 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:11.202417 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:11.552212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:11.552243 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:11.671629 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:11.701802 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.051799 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:12.054435 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:12.170154 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:12.203930 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.557266 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:12.557520 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:12.625739 1013451 pod_ready.go:93] pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace has status "Ready":"True"
	I0127 14:07:12.625769 1013451 pod_ready.go:82] duration metric: took 21.016907428s for pod "metrics-server-7fbb699795-dr2kc" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.625780 1013451 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.635943 1013451 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace has status "Ready":"True"
	I0127 14:07:12.635969 1013451 pod_ready.go:82] duration metric: took 10.183333ms for pod "nvidia-device-plugin-daemonset-bs6d4" in "kube-system" namespace to be "Ready" ...
	I0127 14:07:12.635988 1013451 pod_ready.go:39] duration metric: took 33.08451816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 14:07:12.636039 1013451 api_server.go:52] waiting for apiserver process to appear ...
	I0127 14:07:12.636109 1013451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:07:12.671346 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:12.688681 1013451 api_server.go:72] duration metric: took 41.886073676s to wait for apiserver process to appear ...
	I0127 14:07:12.688712 1013451 api_server.go:88] waiting for apiserver healthz status ...
	I0127 14:07:12.688736 1013451 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0127 14:07:12.701264 1013451 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0127 14:07:12.702757 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:12.703236 1013451 api_server.go:141] control plane version: v1.32.1
	I0127 14:07:12.703267 1013451 api_server.go:131] duration metric: took 14.546167ms to wait for apiserver health ...
	I0127 14:07:12.703280 1013451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 14:07:12.717932 1013451 system_pods.go:59] 18 kube-system pods found
	I0127 14:07:12.717976 1013451 system_pods.go:61] "amd-gpu-device-plugin-89xv2" [7b98e34d-687f-47aa-8a1f-b8c5c016e93e] Running
	I0127 14:07:12.717984 1013451 system_pods.go:61] "coredns-668d6bf9bc-f5h88" [f45297c4-5f83-45a6-9f30-d0b16d29ef1d] Running
	I0127 14:07:12.717995 1013451 system_pods.go:61] "csi-hostpath-attacher-0" [0e65ff6e-fdeb-4e47-a281-58d2846521dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 14:07:12.718012 1013451 system_pods.go:61] "csi-hostpath-resizer-0" [f4b69299-7108-4d71-a19f-c8640d4d9d7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 14:07:12.718024 1013451 system_pods.go:61] "csi-hostpathplugin-8jql5" [cdb87938-f761-462d-aaf8-e4a74f0d8e7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 14:07:12.718035 1013451 system_pods.go:61] "etcd-addons-097644" [15355068-d7bd-4c15-8402-670f796142e0] Running
	I0127 14:07:12.718043 1013451 system_pods.go:61] "kube-apiserver-addons-097644" [3bf8c5a4-9f46-4a38-8c40-03e649c1865a] Running
	I0127 14:07:12.718050 1013451 system_pods.go:61] "kube-controller-manager-addons-097644" [b91db1d0-e6e1-40f4-a230-9496ded8dfbc] Running
	I0127 14:07:12.718057 1013451 system_pods.go:61] "kube-ingress-dns-minikube" [f4e9fbe7-9f01-42c9-abd2-70a375dbf64b] Running
	I0127 14:07:12.718063 1013451 system_pods.go:61] "kube-proxy-f4zwd" [35fadf52-7154-403a-9e7c-d6efebab978e] Running
	I0127 14:07:12.718070 1013451 system_pods.go:61] "kube-scheduler-addons-097644" [64c5112b-77bd-466f-a1ed-e8f2c6512297] Running
	I0127 14:07:12.718076 1013451 system_pods.go:61] "metrics-server-7fbb699795-dr2kc" [d5f1b090-54ae-4efb-ade0-56f8442d821c] Running
	I0127 14:07:12.718082 1013451 system_pods.go:61] "nvidia-device-plugin-daemonset-bs6d4" [157addb8-6c2f-41d6-9d57-8ff984241b50] Running
	I0127 14:07:12.718088 1013451 system_pods.go:61] "registry-6c88467877-gs69t" [56ae8219-917b-43a3-8b3a-9965b018d7ae] Running
	I0127 14:07:12.718096 1013451 system_pods.go:61] "registry-proxy-68qft" [fcd36f1c-2ee6-49df-985c-78afd0b91e4b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 14:07:12.718107 1013451 system_pods.go:61] "snapshot-controller-68b874b76f-bncpk" [b196166f-4021-4337-a63b-54cb610bac71] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.718120 1013451 system_pods.go:61] "snapshot-controller-68b874b76f-pqf9k" [1173dcb4-3cf3-44b8-ae6f-7c755536337d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.718127 1013451 system_pods.go:61] "storage-provisioner" [d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf] Running
	I0127 14:07:12.718139 1013451 system_pods.go:74] duration metric: took 14.846764ms to wait for pod list to return data ...
	I0127 14:07:12.718153 1013451 default_sa.go:34] waiting for default service account to be created ...
	I0127 14:07:12.721126 1013451 default_sa.go:45] found service account: "default"
	I0127 14:07:12.721157 1013451 default_sa.go:55] duration metric: took 2.993622ms for default service account to be created ...
	I0127 14:07:12.721171 1013451 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 14:07:12.728179 1013451 system_pods.go:87] 18 kube-system pods found
	I0127 14:07:12.730708 1013451 system_pods.go:105] "amd-gpu-device-plugin-89xv2" [7b98e34d-687f-47aa-8a1f-b8c5c016e93e] Running
	I0127 14:07:12.730727 1013451 system_pods.go:105] "coredns-668d6bf9bc-f5h88" [f45297c4-5f83-45a6-9f30-d0b16d29ef1d] Running
	I0127 14:07:12.730738 1013451 system_pods.go:105] "csi-hostpath-attacher-0" [0e65ff6e-fdeb-4e47-a281-58d2846521dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 14:07:12.730748 1013451 system_pods.go:105] "csi-hostpath-resizer-0" [f4b69299-7108-4d71-a19f-c8640d4d9d7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 14:07:12.730761 1013451 system_pods.go:105] "csi-hostpathplugin-8jql5" [cdb87938-f761-462d-aaf8-e4a74f0d8e7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 14:07:12.730773 1013451 system_pods.go:105] "etcd-addons-097644" [15355068-d7bd-4c15-8402-670f796142e0] Running
	I0127 14:07:12.730781 1013451 system_pods.go:105] "kube-apiserver-addons-097644" [3bf8c5a4-9f46-4a38-8c40-03e649c1865a] Running
	I0127 14:07:12.730787 1013451 system_pods.go:105] "kube-controller-manager-addons-097644" [b91db1d0-e6e1-40f4-a230-9496ded8dfbc] Running
	I0127 14:07:12.730794 1013451 system_pods.go:105] "kube-ingress-dns-minikube" [f4e9fbe7-9f01-42c9-abd2-70a375dbf64b] Running
	I0127 14:07:12.730798 1013451 system_pods.go:105] "kube-proxy-f4zwd" [35fadf52-7154-403a-9e7c-d6efebab978e] Running
	I0127 14:07:12.730802 1013451 system_pods.go:105] "kube-scheduler-addons-097644" [64c5112b-77bd-466f-a1ed-e8f2c6512297] Running
	I0127 14:07:12.730806 1013451 system_pods.go:105] "metrics-server-7fbb699795-dr2kc" [d5f1b090-54ae-4efb-ade0-56f8442d821c] Running
	I0127 14:07:12.730811 1013451 system_pods.go:105] "nvidia-device-plugin-daemonset-bs6d4" [157addb8-6c2f-41d6-9d57-8ff984241b50] Running
	I0127 14:07:12.730815 1013451 system_pods.go:105] "registry-6c88467877-gs69t" [56ae8219-917b-43a3-8b3a-9965b018d7ae] Running
	I0127 14:07:12.730821 1013451 system_pods.go:105] "registry-proxy-68qft" [fcd36f1c-2ee6-49df-985c-78afd0b91e4b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 14:07:12.730828 1013451 system_pods.go:105] "snapshot-controller-68b874b76f-bncpk" [b196166f-4021-4337-a63b-54cb610bac71] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.730836 1013451 system_pods.go:105] "snapshot-controller-68b874b76f-pqf9k" [1173dcb4-3cf3-44b8-ae6f-7c755536337d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 14:07:12.730843 1013451 system_pods.go:105] "storage-provisioner" [d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf] Running
	I0127 14:07:12.730852 1013451 system_pods.go:147] duration metric: took 9.674182ms to wait for k8s-apps to be running ...
	I0127 14:07:12.730866 1013451 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 14:07:12.730919 1013451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:07:12.776597 1013451 system_svc.go:56] duration metric: took 45.717863ms WaitForService to wait for kubelet
	I0127 14:07:12.776634 1013451 kubeadm.go:582] duration metric: took 41.974036194s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 14:07:12.776668 1013451 node_conditions.go:102] verifying NodePressure condition ...
	I0127 14:07:12.779895 1013451 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 14:07:12.779925 1013451 node_conditions.go:123] node cpu capacity is 2
	I0127 14:07:12.779937 1013451 node_conditions.go:105] duration metric: took 3.263578ms to run NodePressure ...
	I0127 14:07:12.779949 1013451 start.go:241] waiting for startup goroutines ...
	I0127 14:07:13.051978 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:13.052021 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:13.185783 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:13.206287 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:13.550709 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:13.551235 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:13.669317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:13.701284 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:14.050846 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:14.051195 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:14.168756 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:14.202094 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:14.550255 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:14.551602 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:14.669317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:14.701627 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:15.053046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:15.053769 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:15.170995 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:15.203340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:15.550746 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:15.551289 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:15.669797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:15.702168 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:16.050144 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:16.050517 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:16.169356 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:16.201683 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:16.550953 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:16.551195 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:16.669784 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:16.702119 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:17.051144 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:17.051141 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:17.468098 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:17.469892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:17.551344 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:17.551464 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:17.669038 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:17.702218 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:18.051797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:18.052165 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:18.169400 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:18.202195 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:18.551843 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:18.552250 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:18.668610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:18.701555 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:19.050623 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:19.051183 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:19.170878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:19.201626 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:19.563323 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:19.565912 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:19.668974 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:19.702334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:20.051931 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:20.052068 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:20.169838 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:20.201669 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:20.551529 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:20.551698 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:20.669152 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:20.701960 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:21.051433 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:21.051582 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:21.169879 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:21.201792 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:21.551317 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:21.551547 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:21.669135 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:21.701862 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:22.050599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 14:07:22.050786 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:22.169800 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:22.201820 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:22.549984 1013451 kapi.go:107] duration metric: took 43.003916156s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 14:07:22.550678 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:22.670404 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:22.701421 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:23.051144 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:23.169833 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:23.201769 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:23.550570 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:23.669457 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:23.701823 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:24.050614 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:24.169635 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:24.201972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:24.549864 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:24.850060 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:24.850512 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:25.051285 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:25.168488 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:25.202049 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:25.550619 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:25.669472 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:25.701812 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:26.050499 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:26.169201 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:26.201034 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:26.550623 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:26.669459 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:26.702346 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:27.051287 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:27.169129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:27.201158 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:27.551107 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:27.670129 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:27.702139 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:28.050633 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:28.169514 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:28.201745 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:28.549622 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:28.669711 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:28.701840 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:29.049926 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:29.169680 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:29.202737 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:29.550738 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:29.669967 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:29.701832 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:30.051104 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:30.169470 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:30.202270 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:30.550200 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:30.669788 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:30.701729 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:31.050315 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:31.169180 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:31.202245 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:31.550908 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:31.669616 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:31.701623 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:32.049918 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:32.169923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:32.202237 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:32.550701 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:32.669164 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:32.701141 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:33.050480 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:33.168992 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:33.202153 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:33.550701 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:33.669874 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:33.702366 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:34.050511 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:34.169277 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:34.201418 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:34.550643 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:34.669531 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:34.701256 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:35.054928 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:35.169647 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:35.201868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:35.549900 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:35.669754 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:35.701752 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:36.050017 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:36.169892 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:36.204020 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:36.551071 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:36.669899 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:36.701717 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:37.050081 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:37.169825 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:37.202223 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:37.550847 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:37.669530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:37.701678 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:38.050063 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:38.169923 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:38.202463 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:38.549773 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:38.669659 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:38.701996 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:39.050495 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:39.169641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:39.201887 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:39.550593 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:39.670566 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:39.702072 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:40.050380 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:40.169307 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:40.201420 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:40.550999 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:40.669715 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:40.701440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:41.050230 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:41.168879 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:41.202325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:41.550624 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:41.669747 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:41.701809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:42.050493 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:42.169211 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:42.201520 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:42.550682 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:42.669305 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:42.701468 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:43.050555 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:43.169709 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:43.201742 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:43.550616 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:43.669985 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:43.702199 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:44.050462 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:44.168863 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:44.201969 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:44.550657 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:44.669862 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:44.702322 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:45.051337 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:45.169209 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:45.202025 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:45.550160 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:45.668972 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:45.701927 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:46.050307 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:46.168971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:46.202059 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:46.551128 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:46.668578 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:46.702834 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:47.050852 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:47.169959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:47.202008 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:47.551425 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:47.669309 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:47.701110 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:48.051016 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:48.169525 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:48.201587 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:48.550480 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:48.669034 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:48.702415 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:49.050601 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:49.168823 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:49.201585 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:49.550210 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:49.669046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:49.701888 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:50.050296 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:50.169631 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:50.201503 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:50.551501 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:50.669281 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:50.702511 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:51.050900 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:51.169612 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:51.201816 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:51.552111 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:51.671918 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:51.702548 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:52.050260 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:52.168832 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:52.202188 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:52.550695 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:52.669650 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:52.702333 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:53.052245 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:53.169200 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:53.201611 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:53.550672 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:53.669444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:53.701777 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:54.051130 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:54.168868 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:54.202046 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:54.550431 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:54.669306 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:54.701904 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:55.051015 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:55.170280 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:55.201214 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:55.553236 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:55.668853 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:55.702340 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:56.051092 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:56.169953 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:56.202452 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:56.551212 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:56.668750 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:56.702523 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:57.050964 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:57.169807 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:57.201803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:57.550211 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:57.668876 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:57.707900 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:58.050191 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:58.168681 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:58.202039 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:58.550833 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:58.669610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:58.701767 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:59.051468 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:59.169107 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:59.202715 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:07:59.551047 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:07:59.670592 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:07:59.701979 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:00.050778 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:00.169383 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:00.201834 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:00.551100 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:00.669963 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:00.771411 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:01.054273 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:01.169271 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:01.201602 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:01.550680 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:01.669283 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:01.701522 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:02.052977 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:02.169224 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:02.202291 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:02.550191 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:02.669159 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:02.701813 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:03.049670 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:03.198193 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:03.213735 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:03.551488 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:03.669126 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:03.704574 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:04.050148 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:04.169130 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:04.200961 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:04.550132 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:04.684815 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:04.702791 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:05.177951 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:05.178289 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:05.204849 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:05.551607 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:05.670725 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:05.708916 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:06.050874 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:06.172293 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:06.201971 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:06.551280 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:06.669334 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:06.701067 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:07.051436 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:07.169708 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:07.202011 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:07.552925 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:07.668863 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:07.701641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:08.050688 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:08.168959 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:08.202195 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:08.550600 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:08.668882 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:08.702599 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:09.051177 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:09.168919 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:09.203167 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:09.550992 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:09.669419 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:09.701472 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:10.051368 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:10.169506 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:10.201966 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:10.923307 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:10.927584 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:10.927913 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:11.050639 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:11.170106 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:11.272444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:11.552898 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:11.669527 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:11.701595 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:12.050322 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:12.168886 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:12.201829 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:12.550464 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:12.669150 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:12.771687 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:13.050505 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:13.169760 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:13.204975 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:13.551502 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:13.669335 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:13.701321 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:14.050505 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:14.170895 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:14.209305 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:14.550917 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:14.670374 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:14.703360 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:15.056811 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:15.170547 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:15.201903 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:15.551103 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:15.669672 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:15.701742 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:16.051467 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:16.169954 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:16.203694 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:16.551142 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:16.669768 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:16.702805 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:17.051501 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:17.169205 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:17.202951 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:17.551252 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:17.668660 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:17.701825 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:18.051434 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:18.171325 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:18.203909 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:18.551201 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:18.670054 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:18.702443 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:19.050156 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:19.468641 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:19.469516 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:19.550943 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:19.669264 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:19.759545 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 14:08:20.058136 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:20.170948 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:20.203636 1013451 kapi.go:107] duration metric: took 1m39.506848143s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 14:08:20.550335 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:20.668839 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:21.051466 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:21.169190 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:21.550095 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:21.668827 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:22.051580 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:22.169470 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:22.550664 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:22.669514 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:23.051018 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:23.169957 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:23.550439 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:23.669931 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:24.053965 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:24.169878 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:24.550387 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:24.669803 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:25.056975 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:25.172567 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:25.551153 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:25.670581 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:26.051385 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:26.169530 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:26.551217 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:26.669338 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:27.050638 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:27.170170 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:27.550781 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:27.669538 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:28.051621 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:28.169483 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:28.550676 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:28.669440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:29.050516 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:29.169375 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:29.551751 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:29.669212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:30.050939 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:30.169393 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:30.550455 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:30.669253 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:31.050996 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:31.170070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:31.550206 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:31.668763 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:32.051626 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:32.169320 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:32.551069 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:32.669837 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:33.050330 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:33.168620 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:33.550910 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:33.670232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:34.051832 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:34.169178 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:34.550237 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:34.668760 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:35.051600 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:35.168763 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:35.551988 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:35.669108 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:36.051060 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:36.170390 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:36.550794 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:36.670426 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:37.050690 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:37.169249 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:37.550576 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:37.669601 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:38.051570 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:38.169093 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:38.550515 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:38.669589 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:39.050556 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:39.169165 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:39.549996 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:39.669744 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:40.051936 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:40.169233 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:40.551315 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:40.669719 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:41.051496 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:41.169933 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:41.550270 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:41.669462 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:42.051430 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:42.169435 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:42.550648 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:42.669559 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:43.051075 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:43.170173 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:43.550411 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:43.669019 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:44.051147 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:44.169943 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:44.550616 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:44.669541 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:45.051936 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:45.169481 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:45.551946 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:45.669610 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:46.051573 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:46.169440 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:46.551239 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:46.669157 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:47.050473 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:47.169232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:47.550542 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:47.669197 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:48.050628 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:48.169232 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:48.550646 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:48.669371 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:49.050350 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:49.168809 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:49.552159 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:49.668741 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:50.096074 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:50.194902 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:50.551924 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:50.669444 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:51.051559 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:51.169244 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:51.550779 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:51.669835 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:52.051039 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:52.170723 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:52.551544 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:52.669556 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:53.050634 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:53.169497 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:53.551283 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:53.670037 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:54.051147 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:54.170233 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:54.550184 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:54.669816 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:55.051429 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:55.169212 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:55.550803 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:55.668993 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:56.050841 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:56.169885 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:56.550306 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:56.670189 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:57.050387 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:57.170258 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:57.551101 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:57.669797 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:58.051185 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:58.170985 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:58.550560 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:58.676095 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:59.051442 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:59.169894 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:08:59.551564 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:08:59.670164 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:00.050493 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:00.170055 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:00.581252 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:00.780484 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:01.055777 1013451 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 14:09:01.174697 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:01.552096 1013451 kapi.go:107] duration metric: took 2m22.006221923s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 14:09:01.671070 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:02.169799 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:02.683707 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:03.169279 1013451 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 14:09:03.670330 1013451 kapi.go:107] duration metric: took 2m20.004881029s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0127 14:09:03.672423 1013451 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-097644 cluster.
	I0127 14:09:03.673752 1013451 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 14:09:03.675214 1013451 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 14:09:03.676891 1013451 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner-rancher, nvidia-device-plugin, amd-gpu-device-plugin, metrics-server, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0127 14:09:03.678180 1013451 addons.go:514] duration metric: took 2m32.875560916s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner-rancher nvidia-device-plugin amd-gpu-device-plugin metrics-server storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0127 14:09:03.678236 1013451 start.go:246] waiting for cluster config update ...
	I0127 14:09:03.678259 1013451 start.go:255] writing updated cluster config ...
	I0127 14:09:03.678549 1013451 ssh_runner.go:195] Run: rm -f paused
	I0127 14:09:03.733995 1013451 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 14:09:03.735875 1013451 out.go:177] * Done! kubectl is now configured to use "addons-097644" cluster and "default" namespace by default
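Note on the gcp-auth guidance printed above: the addon's opt-out is driven by a pod label. Below is a minimal sketch of a pod configuration that skips credential mounting, assuming the `gcp-auth-skip-secret` label key named in the log is honored; the pod name, label value, and image are illustrative placeholders, not taken from this report.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                  # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"      # key taken from the log message above; value assumed
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9  # placeholder image

Per the same log output, pods created before the addon was enabled would need to be recreated (or the addon re-enabled with --refresh) to pick up the mounted credentials.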
	
	
	==> CRI-O <==
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.711465520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987270711438437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36642ecc-0c27-4617-ada9-fc68df009d0f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.712340105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fff0c07b-b4b7-4795-abbe-e1fd76a50dd0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.712399706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fff0c07b-b4b7-4795-abbe-e1fd76a50dd0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.712984337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fcf79af65e2f7ad903e2fc1428cdac9ca62e96e4b1719adfc6b9554c96fc10fe,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a10987
1454298c,State:CONTAINER_RUNNING,CreatedAt:1737986899554109765,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068a6720098eadf4f4cc6bf5aaeb9c19235c6135427dcb6635ff3c3296348d66,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6
aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737986897257220941,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf0dc31a82587724a7299f886e953908ae98475a469ab6e2ccec29ff56aa02d,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737986895399396457,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb04775296fca86d00d58e7fc8e6e3f8cf1fcaf194273f66b566147fd5a53515,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737986894242485986,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b3e12efdae42c1c06ab45af4d83f13b32ca2928c06f617e42cce259207eabf,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737986892559604249,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42ee84642ee13587425b84e6c0bb87bf25c5095b74dbe1c4e3fe30c384e6b05,PodSandboxId:dc97cd3cc6432a2c8e83961efb3496f6002bc9963dc0894ea326ba3bfafcb0a5,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737986890998219686,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e65ff6e-fdeb-4e47-a281-58d2846521dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875b7791795d9089a384c42a3f48f7d8c73948964f347e89facfd0db7cb6d872,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata
{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737986888843226637,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7136df91c57dc2b505215a87f2db26c920982ab23199646e92baf8a6114742,
PodSandboxId:67873ca52686ef5f09d5803b960439ff2e9dff63fe57e4e9bc4ae7755a4c3252,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737986887279068126,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4b69299-7108-4d71-a19f-c8640d4d9d7b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5b85488e514e0b9a70d6ad793832fc5bb440dd1e2
3119f7989c02aac92a0be,PodSandboxId:9f6d46661caff66f1c8478c624be9b9ee4cd73233b81554c51e040ef4ff9f134,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885581813198,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-bncpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b196166f-4021-4337-a63b-54cb610bac71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6697c28123fa723ad9e7b73bd9376412faac66eaa21c563ada2217e72ab04b,PodSandboxId:afd0e0bf3daf9323242cf3bf126cbca37788c8d89b388a951f2426ac862d252a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885280378193,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-pqf9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1173dcb4-3cf3-44b8-ae6f-7c755536337d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761f2e2c308db6554da26a68220fc010ca638b4c243d58e3a7bfd44b9ab8fa5b,PodSandboxId:f53250fc56d90066f9383c841bbfc4ba8b6c908ead3e11a68c12eac20eb4cf8a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1737986833337389314,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-rh4cz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 289e1bd5-2864-4f7c-ba18-3d3de22a3bf1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9
fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage
-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd710216
1f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,C
reatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fff0c07b-b4b7-4795-abbe-e1fd76a50dd0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.754413766Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d098f5be-6b22-4dfd-b66a-d42853259ea6 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.754506439Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d098f5be-6b22-4dfd-b66a-d42853259ea6 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.755805732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d82cc431-8bfb-4cf6-b7c2-abd151d3575c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.757042213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987270757016966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d82cc431-8bfb-4cf6-b7c2-abd151d3575c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.757671710Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb9385cd-7c76-49de-9927-d5c641d39d50 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.757724040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb9385cd-7c76-49de-9927-d5c641d39d50 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.758980174Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fcf79af65e2f7ad903e2fc1428cdac9ca62e96e4b1719adfc6b9554c96fc10fe,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a10987
1454298c,State:CONTAINER_RUNNING,CreatedAt:1737986899554109765,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068a6720098eadf4f4cc6bf5aaeb9c19235c6135427dcb6635ff3c3296348d66,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6
aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737986897257220941,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf0dc31a82587724a7299f886e953908ae98475a469ab6e2ccec29ff56aa02d,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737986895399396457,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb04775296fca86d00d58e7fc8e6e3f8cf1fcaf194273f66b566147fd5a53515,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737986894242485986,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b3e12efdae42c1c06ab45af4d83f13b32ca2928c06f617e42cce259207eabf,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737986892559604249,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42ee84642ee13587425b84e6c0bb87bf25c5095b74dbe1c4e3fe30c384e6b05,PodSandboxId:dc97cd3cc6432a2c8e83961efb3496f6002bc9963dc0894ea326ba3bfafcb0a5,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737986890998219686,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e65ff6e-fdeb-4e47-a281-58d2846521dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875b7791795d9089a384c42a3f48f7d8c73948964f347e89facfd0db7cb6d872,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata
{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737986888843226637,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7136df91c57dc2b505215a87f2db26c920982ab23199646e92baf8a6114742,
PodSandboxId:67873ca52686ef5f09d5803b960439ff2e9dff63fe57e4e9bc4ae7755a4c3252,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737986887279068126,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4b69299-7108-4d71-a19f-c8640d4d9d7b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5b85488e514e0b9a70d6ad793832fc5bb440dd1e2
3119f7989c02aac92a0be,PodSandboxId:9f6d46661caff66f1c8478c624be9b9ee4cd73233b81554c51e040ef4ff9f134,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885581813198,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-bncpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b196166f-4021-4337-a63b-54cb610bac71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6697c28123fa723ad9e7b73bd9376412faac66eaa21c563ada2217e72ab04b,PodSandboxId:afd0e0bf3daf9323242cf3bf126cbca37788c8d89b388a951f2426ac862d252a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885280378193,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-pqf9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1173dcb4-3cf3-44b8-ae6f-7c755536337d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761f2e2c308db6554da26a68220fc010ca638b4c243d58e3a7bfd44b9ab8fa5b,PodSandboxId:f53250fc56d90066f9383c841bbfc4ba8b6c908ead3e11a68c12eac20eb4cf8a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1737986833337389314,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-rh4cz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 289e1bd5-2864-4f7c-ba18-3d3de22a3bf1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9
fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage
-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd710216
1f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,C
reatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb9385cd-7c76-49de-9927-d5c641d39d50 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.801719158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5670e18b-9411-48f1-8803-a74f414bbbbc name=/runtime.v1.RuntimeService/Version
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.801904382Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5670e18b-9411-48f1-8803-a74f414bbbbc name=/runtime.v1.RuntimeService/Version
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.804515142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e082695-7dc0-4910-81d3-bb019a97f681 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.805766367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987270805737350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e082695-7dc0-4910-81d3-bb019a97f681 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.806618816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6a08e34-57d8-4934-a504-6515f66f3cd1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.806740369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6a08e34-57d8-4934-a504-6515f66f3cd1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.807962761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fcf79af65e2f7ad903e2fc1428cdac9ca62e96e4b1719adfc6b9554c96fc10fe,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a10987
1454298c,State:CONTAINER_RUNNING,CreatedAt:1737986899554109765,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068a6720098eadf4f4cc6bf5aaeb9c19235c6135427dcb6635ff3c3296348d66,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6
aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737986897257220941,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf0dc31a82587724a7299f886e953908ae98475a469ab6e2ccec29ff56aa02d,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737986895399396457,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb04775296fca86d00d58e7fc8e6e3f8cf1fcaf194273f66b566147fd5a53515,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737986894242485986,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b3e12efdae42c1c06ab45af4d83f13b32ca2928c06f617e42cce259207eabf,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737986892559604249,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42ee84642ee13587425b84e6c0bb87bf25c5095b74dbe1c4e3fe30c384e6b05,PodSandboxId:dc97cd3cc6432a2c8e83961efb3496f6002bc9963dc0894ea326ba3bfafcb0a5,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737986890998219686,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e65ff6e-fdeb-4e47-a281-58d2846521dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875b7791795d9089a384c42a3f48f7d8c73948964f347e89facfd0db7cb6d872,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata
{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737986888843226637,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7136df91c57dc2b505215a87f2db26c920982ab23199646e92baf8a6114742,
PodSandboxId:67873ca52686ef5f09d5803b960439ff2e9dff63fe57e4e9bc4ae7755a4c3252,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737986887279068126,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4b69299-7108-4d71-a19f-c8640d4d9d7b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5b85488e514e0b9a70d6ad793832fc5bb440dd1e2
3119f7989c02aac92a0be,PodSandboxId:9f6d46661caff66f1c8478c624be9b9ee4cd73233b81554c51e040ef4ff9f134,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885581813198,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-bncpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b196166f-4021-4337-a63b-54cb610bac71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6697c28123fa723ad9e7b73bd9376412faac66eaa21c563ada2217e72ab04b,PodSandboxId:afd0e0bf3daf9323242cf3bf126cbca37788c8d89b388a951f2426ac862d252a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885280378193,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-pqf9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1173dcb4-3cf3-44b8-ae6f-7c755536337d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761f2e2c308db6554da26a68220fc010ca638b4c243d58e3a7bfd44b9ab8fa5b,PodSandboxId:f53250fc56d90066f9383c841bbfc4ba8b6c908ead3e11a68c12eac20eb4cf8a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1737986833337389314,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-rh4cz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 289e1bd5-2864-4f7c-ba18-3d3de22a3bf1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9
fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage
-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd710216
1f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,C
reatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6a08e34-57d8-4934-a504-6515f66f3cd1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.848474584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b69df9f1-e475-4725-b823-de8fcbdf40b1 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.848564928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b69df9f1-e475-4725-b823-de8fcbdf40b1 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.849833743Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=175b72f7-9a7d-4817-87b6-274191b9a501 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.851384316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987270851323405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=175b72f7-9a7d-4817-87b6-274191b9a501 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.852033201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a6c26f8-89cf-4514-b38e-a94a3a9d77e4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.852110300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a6c26f8-89cf-4514-b38e-a94a3a9d77e4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:14:30 addons-097644 crio[657]: time="2025-01-27 14:14:30.852602411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1f81789dc13409c9c89abd809f5b52e2d832e26f91690d41ab03d14e0cc9d3e,PodSandboxId:5352d026f28eb9460b4dd0d0f512e27b3dc8581a39e7da219da2c6c2176d013e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737986947877151509,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0467110-2a34-4ee9-a43d-ff359ed55ac8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31c99b76a81dc6389be76b64f3ef0313d32e7b67b73abfe9d18279163fd6b43a,PodSandboxId:4b29b0e07759182fe6e91d2c67c8a6f765579226662ff94442fc003bd9b459b8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737986940974818963,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nz5zf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a8a084d-8bfd-4fe1-af66-150effc1def4,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.p
orts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fcf79af65e2f7ad903e2fc1428cdac9ca62e96e4b1719adfc6b9554c96fc10fe,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a10987
1454298c,State:CONTAINER_RUNNING,CreatedAt:1737986899554109765,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068a6720098eadf4f4cc6bf5aaeb9c19235c6135427dcb6635ff3c3296348d66,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6
aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737986897257220941,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf0dc31a82587724a7299f886e953908ae98475a469ab6e2ccec29ff56aa02d,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737986895399396457,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb04775296fca86d00d58e7fc8e6e3f8cf1fcaf194273f66b566147fd5a53515,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737986894242485986,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83b3e12efdae42c1c06ab45af4d83f13b32ca2928c06f617e42cce259207eabf,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-s
torage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737986892559604249,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42ee84642ee13587425b84e6c0bb87bf25c5095b74dbe1c4e3fe30c384e6b05,PodSandboxId:dc97cd3cc6432a2c8e83961efb3496f6002bc9963dc0894ea326ba3bfafcb0a5,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737986890998219686,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e65ff6e-fdeb-4e47-a281-58d2846521dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:875b7791795d9089a384c42a3f48f7d8c73948964f347e89facfd0db7cb6d872,PodSandboxId:c2c0fbe716ada94ae1d2d787f97720ed31d1b8a7a939d9ec42696d70ac0f430b,Metadata:&ContainerMetadata
{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737986888843226637,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-8jql5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdb87938-f761-462d-aaf8-e4a74f0d8e7e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7136df91c57dc2b505215a87f2db26c920982ab23199646e92baf8a6114742,
PodSandboxId:67873ca52686ef5f09d5803b960439ff2e9dff63fe57e4e9bc4ae7755a4c3252,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737986887279068126,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4b69299-7108-4d71-a19f-c8640d4d9d7b,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5b85488e514e0b9a70d6ad793832fc5bb440dd1e2
3119f7989c02aac92a0be,PodSandboxId:9f6d46661caff66f1c8478c624be9b9ee4cd73233b81554c51e040ef4ff9f134,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885581813198,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-bncpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b196166f-4021-4337-a63b-54cb610bac71,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:58904f506013f542e9b231a6808678fd0573c3bd8cd952423087c0d6173cf906,PodSandboxId:331461d468a0287af39cdd2959fd770331d81d0a988e0b06742c072ed08529de,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986885438608437,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-bzwfx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 299a7a11-a7cd-47e8-a6e3-3bad249287e8,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd6697c28123fa723ad9e7b73bd9376412faac66eaa21c563ada2217e72ab04b,PodSandboxId:afd0e0bf3daf9323242cf3bf126cbca37788c8d89b388a951f2426ac862d252a,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737986885280378193,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-pqf9k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1173dcb4-3cf3-44b8-ae6f-7c755536337d,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9f1bf88ae46d82528c0e29a5bc57c290c29e5ead7bc5458611dfb34f3f9396,PodSandboxId:2b7094e2898b69c6870171d7731d4bbe785d99322137dfbdda62bbf372258d31,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737986883193453747,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-k6p8j,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3601244e-0ff0-48fd-82e5-191697a50cc1,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:761f2e2c308db6554da26a68220fc010ca638b4c243d58e3a7bfd44b9ab8fa5b,PodSandboxId:f53250fc56d90066f9383c841bbfc4ba8b6c908ead3e11a68c12eac20eb4cf8a,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1737986833337389314,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-rh4cz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 289e1bd5-2864-4f7c-ba18-3d3de22a3bf1,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:623ee8fa394741d20631eadce0645de0c3077b41bda84ba16e69298ce2fb2aee,PodSandboxId:0a6270a918122fceb54c8234f272b5923906b77f97d500716b438d261d65f1b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737986810137964591,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-89xv2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b98e34d-687f-47aa-8a1f-b8c5c016e93e,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05863be1b9fa2047744ae57909fd013effd20461b13a53c0203548ae5163cc17,PodSandboxId:966718e37de57247bbe1778e18c2315e360be73508d87c193cbf3d064b76c59a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737986808183321140,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e9
fbe7-9f01-42c9-abd2-70a375dbf64b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2,PodSandboxId:a26522c3d4205d02491ecf32bd122e8d144ae7b6569272f6f8ee59695baccbf1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737986800989304931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage
-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68777c6-5f9e-44a6-b8cb-1c9f8ee105cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079,PodSandboxId:548cc3bbe430b8930513f7f2f476e7c848cffcdece72034c5d9775b7f9bc1f65,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737986794505629258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-f5h88,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: f45297c4-5f83-45a6-9f30-d0b16d29ef1d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5,PodSandboxId:8b4984c018663d7e6ca920385de46f500231a3ebad824f063ff512bea4ada26e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd710216
1f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737986790983330349,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f4zwd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35fadf52-7154-403a-9e7c-d6efebab978e,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183,PodSandboxId:a8b62c040eb6fb2cb43d0aeb7e74ef69d37550776f341a2390e7c11b9d2376bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,C
reatedAt:1737986780282577344,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca1baee3c5773302dcbf66e88a017664,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19,PodSandboxId:eb6ed8d17f58c3039433b49564b8b5f4564167c636d90e8870ec9ddacd177da7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737986780215195274,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43e884d025cc58fee39959cedced04f6,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312,PodSandboxId:0c77accc1a4c11bee5b61d5981f6c394963573cda5b31141bd2aae673803805a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737986780205970791,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e1bdeb768a2f70bc6bfa23146163e88,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975,PodSandboxId:37576819d5068e04b4ea5a46a934bdae664ba9dce2b576c4f6549ea7360cf9f4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737986780233143816,Labels:map[strin
g]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097644,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac465cf17a7fdee35cc7d0daf7e27a45,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a6c26f8-89cf-4514-b38e-a94a3a9d77e4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b1f81789dc134       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   5352d026f28eb       busybox
	31c99b76a81dc       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b                             5 minutes ago       Running             controller                               0                   4b29b0e077591       ingress-nginx-controller-56d7c84fd4-nz5zf
	fcf79af65e2f7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	068a6720098ea       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	3bf0dc31a8258       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	eb04775296fca       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	83b3e12efdae4       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	f42ee84642ee1       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   dc97cd3cc6432       csi-hostpath-attacher-0
	875b7791795d9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   c2c0fbe716ada       csi-hostpathplugin-8jql5
	ef7136df91c57       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   67873ca52686e       csi-hostpath-resizer-0
	2e5b85488e514       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   9f6d46661caff       snapshot-controller-68b874b76f-bncpk
	58904f506013f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   6 minutes ago       Exited              patch                                    0                   331461d468a02       ingress-nginx-admission-patch-bzwfx
	fd6697c28123f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   afd0e0bf3daf9       snapshot-controller-68b874b76f-pqf9k
	6c9f1bf88ae46       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   6 minutes ago       Exited              create                                   0                   2b7094e2898b6       ingress-nginx-admission-create-k6p8j
	761f2e2c308db       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   f53250fc56d90       local-path-provisioner-76f89f99b5-rh4cz
	623ee8fa39474       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     7 minutes ago       Running             amd-gpu-device-plugin                    0                   0a6270a918122       amd-gpu-device-plugin-89xv2
	05863be1b9fa2       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             7 minutes ago       Running             minikube-ingress-dns                     0                   966718e37de57       kube-ingress-dns-minikube
	d33c8ab68a095       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   a26522c3d4205       storage-provisioner
	2c916e18de1c7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             7 minutes ago       Running             coredns                                  0                   548cc3bbe430b       coredns-668d6bf9bc-f5h88
	f90efac6917c6       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                                             7 minutes ago       Running             kube-proxy                               0                   8b4984c018663       kube-proxy-f4zwd
	c5e0a45028148       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                                             8 minutes ago       Running             etcd                                     0                   a8b62c040eb6f       etcd-addons-097644
	726cfe5819ce4       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                                             8 minutes ago       Running             kube-scheduler                           0                   37576819d5068       kube-scheduler-addons-097644
	507cc4bfd4bac       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                                             8 minutes ago       Running             kube-apiserver                           0                   eb6ed8d17f58c       kube-apiserver-addons-097644
	ca97beecbf34e       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                                             8 minutes ago       Running             kube-controller-manager                  0                   0c77accc1a4c1       kube-controller-manager-addons-097644
	
	
	==> coredns [2c916e18de1c73a60943849ae15c67f7ee646da8cba2b577f0bf51585b989079] <==
	[INFO] 10.244.0.8:34771 - 41457 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000328061s
	[INFO] 10.244.0.8:34771 - 47939 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000166673s
	[INFO] 10.244.0.8:34771 - 30775 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000224059s
	[INFO] 10.244.0.8:34771 - 16890 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093275s
	[INFO] 10.244.0.8:34771 - 16011 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000165914s
	[INFO] 10.244.0.8:34771 - 48692 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000088562s
	[INFO] 10.244.0.8:34771 - 33081 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000156426s
	[INFO] 10.244.0.8:55120 - 55152 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154188s
	[INFO] 10.244.0.8:55120 - 55445 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200323s
	[INFO] 10.244.0.8:54848 - 11098 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079484s
	[INFO] 10.244.0.8:54848 - 10854 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000185223s
	[INFO] 10.244.0.8:52222 - 8992 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065065s
	[INFO] 10.244.0.8:52222 - 8727 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000141435s
	[INFO] 10.244.0.8:35583 - 57125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071096s
	[INFO] 10.244.0.8:35583 - 56925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00025462s
	[INFO] 10.244.0.23:58183 - 7007 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00047367s
	[INFO] 10.244.0.23:56358 - 26808 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002454598s
	[INFO] 10.244.0.23:37519 - 11515 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000306136s
	[INFO] 10.244.0.23:56095 - 53118 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000073046s
	[INFO] 10.244.0.23:52826 - 17024 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167726s
	[INFO] 10.244.0.23:58700 - 37913 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072195s
	[INFO] 10.244.0.23:59320 - 25584 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001303055s
	[INFO] 10.244.0.23:59906 - 15774 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001635555s
	[INFO] 10.244.0.27:50450 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00056016s
	[INFO] 10.244.0.27:51006 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141678s
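
The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion, not a resolution failure: with the default pod setting of ndots:5, a lookup of registry.kube-system.svc.cluster.local (four dots) is first retried with every search suffix appended before the bare name is queried and answered. A minimal Python sketch of that expansion, assuming the default search list of a pod in the kube-system namespace, reproduces the query names seen in the log:

    # Sketch only: mimic glibc-style search-path expansion for a relative
    # name with fewer than `ndots` dots, as a CoreDNS client would issue it.
    SEARCH = [  # assumed search list for a pod in the kube-system namespace
        "kube-system.svc.cluster.local",
        "svc.cluster.local",
        "cluster.local",
    ]

    def candidates(name: str, search: list[str], ndots: int = 5) -> list[str]:
        """Names below the ndots threshold get each search suffix first,
        then the name as written (absolute names are omitted for brevity)."""
        assert not name.endswith(".") and name.count(".") < ndots
        return [f"{name}.{suffix}" for suffix in search] + [name]

    for qname in candidates("registry.kube-system.svc.cluster.local", SEARCH):
        print(qname)  # matches the A/AAAA query names logged by CoreDNS above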
	
	
	==> describe nodes <==
	Name:               addons-097644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-097644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=addons-097644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_06_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-097644
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-097644"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:06:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-097644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:14:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:10:30 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:10:30 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:10:30 +0000   Mon, 27 Jan 2025 14:06:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:10:30 +0000   Mon, 27 Jan 2025 14:06:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    addons-097644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 53015ffc2749464aa9b7aa6eb16c09c0
	  System UUID:                53015ffc-2749-464a-a9b7-aa6eb16c09c0
	  Boot ID:                    b226972f-a6fa-415b-9827-3320ed4fb6de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-nz5zf                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         7m52s
	  kube-system                 amd-gpu-device-plugin-89xv2                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 coredns-668d6bf9bc-f5h88                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m1s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 csi-hostpathplugin-8jql5                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 etcd-addons-097644                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m6s
	  kube-system                 kube-apiserver-addons-097644                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-controller-manager-addons-097644                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-proxy-f4zwd                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-scheduler-addons-097644                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 snapshot-controller-68b874b76f-bncpk                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 snapshot-controller-68b874b76f-pqf9k                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  local-path-storage          helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab    0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  local-path-storage          local-path-provisioner-76f89f99b5-rh4cz                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m59s  kube-proxy       
	  Normal  Starting                 8m6s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m6s   kubelet          Node addons-097644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m6s   kubelet          Node addons-097644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m6s   kubelet          Node addons-097644 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m4s   kubelet          Node addons-097644 status is now: NodeReady
	  Normal  RegisteredNode           8m2s   node-controller  Node addons-097644 event: Registered Node addons-097644 in Controller
	
	
	==> dmesg <==
	[  +0.090871] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.139758] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.148412] systemd-fstab-generator[1389]: Ignoring "noauto" option for root device
	[  +4.853975] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.047705] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.422180] kauditd_printk_skb: 124 callbacks suppressed
	[Jan27 14:07] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.435741] kauditd_printk_skb: 8 callbacks suppressed
	[ +16.990262] kauditd_printk_skb: 2 callbacks suppressed
	[Jan27 14:08] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.413265] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.243017] kauditd_printk_skb: 38 callbacks suppressed
	[Jan27 14:09] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.625061] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.938591] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.071460] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.141586] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.033258] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.978501] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.866607] kauditd_printk_skb: 11 callbacks suppressed
	[Jan27 14:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.534292] kauditd_printk_skb: 3 callbacks suppressed
	[ +13.735780] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.706759] kauditd_printk_skb: 24 callbacks suppressed
	[Jan27 14:11] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [c5e0a45028148fd4e8391f8ec068ea1dc26781777b65595f0e77d40e5887b183] <==
	{"level":"info","ts":"2025-01-27T14:08:10.904125Z","caller":"traceutil/trace.go:171","msg":"trace[242124152] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"370.918398ms","start":"2025-01-27T14:08:10.533195Z","end":"2025-01-27T14:08:10.904114Z","steps":["trace[242124152] 'agreement among raft nodes before linearized reading'  (duration: 370.592929ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:10.904374Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:08:10.533183Z","time spent":"371.159759ms","remote":"127.0.0.1:48676","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T14:08:10.904722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.976353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:10.905648Z","caller":"traceutil/trace.go:171","msg":"trace[1424528345] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"221.920939ms","start":"2025-01-27T14:08:10.683718Z","end":"2025-01-27T14:08:10.905639Z","steps":["trace[1424528345] 'agreement among raft nodes before linearized reading'  (duration: 220.979628ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:10.904722Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.96691ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:10.906143Z","caller":"traceutil/trace.go:171","msg":"trace[918443162] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1047; }","duration":"299.34809ms","start":"2025-01-27T14:08:10.606727Z","end":"2025-01-27T14:08:10.906075Z","steps":["trace[918443162] 'agreement among raft nodes before linearized reading'  (duration: 297.968536ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:10.904817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.575661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:10.908832Z","caller":"traceutil/trace.go:171","msg":"trace[148756941] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1047; }","duration":"256.607672ms","start":"2025-01-27T14:08:10.652214Z","end":"2025-01-27T14:08:10.908821Z","steps":["trace[148756941] 'agreement among raft nodes before linearized reading'  (duration: 252.568435ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:08:19.448000Z","caller":"traceutil/trace.go:171","msg":"trace[656750930] linearizableReadLoop","detail":"{readStateIndex:1139; appliedIndex:1138; }","duration":"296.312018ms","start":"2025-01-27T14:08:19.151675Z","end":"2025-01-27T14:08:19.447987Z","steps":["trace[656750930] 'read index received'  (duration: 296.141594ms)","trace[656750930] 'applied index is now lower than readState.Index'  (duration: 169.942µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:08:19.448186Z","caller":"traceutil/trace.go:171","msg":"trace[868736163] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"383.593879ms","start":"2025-01-27T14:08:19.064585Z","end":"2025-01-27T14:08:19.448179Z","steps":["trace[868736163] 'process raft request'  (duration: 383.321546ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:19.448344Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:08:19.064555Z","time spent":"383.668202ms","remote":"127.0.0.1:48734","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-097644\" mod_revision:1041 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-097644\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-097644\" > >"}
	{"level":"warn","ts":"2025-01-27T14:08:19.448623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.485588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:19.449317Z","caller":"traceutil/trace.go:171","msg":"trace[684967347] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"266.209192ms","start":"2025-01-27T14:08:19.183097Z","end":"2025-01-27T14:08:19.449306Z","steps":["trace[684967347] 'agreement among raft nodes before linearized reading'  (duration: 265.481327ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:08:19.448655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.980294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:08:19.449472Z","caller":"traceutil/trace.go:171","msg":"trace[1855013821] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"297.811336ms","start":"2025-01-27T14:08:19.151651Z","end":"2025-01-27T14:08:19.449462Z","steps":["trace[1855013821] 'agreement among raft nodes before linearized reading'  (duration: 296.993016ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:00.558225Z","caller":"traceutil/trace.go:171","msg":"trace[1913945553] transaction","detail":"{read_only:false; response_revision:1172; number_of_response:1; }","duration":"241.852683ms","start":"2025-01-27T14:09:00.316354Z","end":"2025-01-27T14:09:00.558207Z","steps":["trace[1913945553] 'process raft request'  (duration: 241.733069ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:09:00.758982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.118372ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:09:00.759127Z","caller":"traceutil/trace.go:171","msg":"trace[1498771159] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1172; }","duration":"109.340678ms","start":"2025-01-27T14:09:00.649774Z","end":"2025-01-27T14:09:00.759114Z","steps":["trace[1498771159] 'range keys from in-memory index tree'  (duration: 109.071803ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:46.406933Z","caller":"traceutil/trace.go:171","msg":"trace[1886057008] transaction","detail":"{read_only:false; response_revision:1428; number_of_response:1; }","duration":"194.14911ms","start":"2025-01-27T14:09:46.212756Z","end":"2025-01-27T14:09:46.406905Z","steps":["trace[1886057008] 'process raft request'  (duration: 193.987326ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:09:46.407441Z","caller":"traceutil/trace.go:171","msg":"trace[278748796] linearizableReadLoop","detail":"{readStateIndex:1488; appliedIndex:1488; }","duration":"179.099246ms","start":"2025-01-27T14:09:46.228323Z","end":"2025-01-27T14:09:46.407422Z","steps":["trace[278748796] 'read index received'  (duration: 179.093014ms)","trace[278748796] 'applied index is now lower than readState.Index'  (duration: 5.429µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:09:46.407629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.267358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab\" limit:1 ","response":"range_response_count:1 size:4006"}
	{"level":"info","ts":"2025-01-27T14:09:46.407673Z","caller":"traceutil/trace.go:171","msg":"trace[2015123404] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab; range_end:; response_count:1; response_revision:1428; }","duration":"179.426533ms","start":"2025-01-27T14:09:46.228236Z","end":"2025-01-27T14:09:46.407663Z","steps":["trace[2015123404] 'agreement among raft nodes before linearized reading'  (duration: 179.274245ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:10:20.372020Z","caller":"traceutil/trace.go:171","msg":"trace[308720478] linearizableReadLoop","detail":"{readStateIndex:1636; appliedIndex:1635; }","duration":"166.921538ms","start":"2025-01-27T14:10:20.205070Z","end":"2025-01-27T14:10:20.371992Z","steps":["trace[308720478] 'read index received'  (duration: 164.842263ms)","trace[308720478] 'applied index is now lower than readState.Index'  (duration: 2.078354ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:10:20.372181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.088702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:10:20.372218Z","caller":"traceutil/trace.go:171","msg":"trace[543223298] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1566; }","duration":"167.165514ms","start":"2025-01-27T14:10:20.205047Z","end":"2025-01-27T14:10:20.372213Z","steps":["trace[543223298] 'agreement among raft nodes before linearized reading'  (duration: 167.085674ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:14:31 up 8 min,  0 users,  load average: 0.41, 0.63, 0.46
	Linux addons-097644 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [507cc4bfd4bacc7171e8b29ab1c7cf6755a54cd4e33644a7e010521389c99e19] <==
	I0127 14:06:38.603309       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 14:06:38.608746       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 14:06:39.196104       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.101.77.230"}
	I0127 14:06:39.247680       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.106.94.214"}
	I0127 14:06:39.310159       1 controller.go:615] quota admission added evaluator for: jobs.batch
	I0127 14:06:40.278552       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.103.89.29"}
	I0127 14:06:40.295443       1 controller.go:615] quota admission added evaluator for: statefulsets.apps
	I0127 14:06:40.551299       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.238.108"}
	I0127 14:06:43.094310       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.54.150"}
	W0127 14:07:12.374826       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 14:07:12.375556       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0127 14:07:12.376214       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.247.116:443: connect: connection refused" logger="UnhandledError"
	E0127 14:07:12.378540       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.247.116:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.247.116:443: connect: connection refused" logger="UnhandledError"
	I0127 14:07:12.446208       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0127 14:09:14.350334       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:50822: use of closed network connection
	E0127 14:09:14.546345       1 conn.go:339] Error on socket receive: read tcp 192.168.39.228:8443->192.168.39.1:50860: use of closed network connection
	I0127 14:09:23.868341       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.143.116"}
	I0127 14:10:08.420199       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 14:10:09.465650       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0127 14:10:13.397817       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 14:10:13.989804       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 14:10:14.197521       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.220.0"}
	
	
	==> kube-controller-manager [ca97beecbf34e8cd79fb655f7ac780b4687619a6508d22de1fae488cf0049312] <==
	I0127 14:10:30.657295       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 14:10:30.657339       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 14:10:30.752371       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="addons-097644"
	I0127 14:10:36.681268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-575dd5996b" duration="65.054µs"
	I0127 14:10:46.802388       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0127 14:10:51.258393       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:10:51.259462       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:10:51.260361       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:10:51.260407       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 14:11:36.085236       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:11:36.086513       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:11:36.087509       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:11:36.087554       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 14:12:13.715390       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:12:13.716583       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:12:13.717569       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:12:13.717639       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 14:12:52.915028       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:12:52.916434       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:12:52.917454       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:12:52.917531       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 14:13:52.773993       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 14:13:52.775235       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 14:13:52.776130       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 14:13:52.776208       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f90efac6917c68038d63a14118f3bb156f497e40e9ad17243eb081ca38d088f5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:06:31.963275       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:06:31.979022       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	E0127 14:06:31.979136       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:06:32.077913       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:06:32.077966       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:06:32.077989       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:06:32.084140       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:06:32.085000       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:06:32.085035       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:06:32.100525       1 config.go:199] "Starting service config controller"
	I0127 14:06:32.100558       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:06:32.100585       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:06:32.100589       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:06:32.101170       1 config.go:329] "Starting node config controller"
	I0127 14:06:32.101178       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:06:32.200914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:06:32.200982       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:06:32.201769       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [726cfe5819ce466e645fc14abb234442552bd6186cbb1f21a5e46f21de73c975] <==
	W0127 14:06:23.028289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 14:06:23.028514       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.028258       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:23.028527       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.832298       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 14:06:23.832354       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.890209       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 14:06:23.890242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:23.952607       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 14:06:23.952764       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.012969       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:24.013220       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.013000       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 14:06:24.013543       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 14:06:24.051624       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 14:06:24.051685       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.102044       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 14:06:24.102173       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.130067       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 14:06:24.130122       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.176207       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 14:06:24.176269       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 14:06:24.284632       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 14:06:24.284687       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 14:06:26.404586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:13:28 addons-097644 kubelet[1230]: E0127 14:13:28.805089    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e1bbc3eb-e3d8-4361-986a-7836ef9e6bac"
	Jan 27 14:13:36 addons-097644 kubelet[1230]: E0127 14:13:36.131280    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987216130618339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:13:36 addons-097644 kubelet[1230]: E0127 14:13:36.131329    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987216130618339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:13:46 addons-097644 kubelet[1230]: E0127 14:13:46.133405    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987226132969756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:13:46 addons-097644 kubelet[1230]: E0127 14:13:46.133800    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987226132969756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:13:51 addons-097644 kubelet[1230]: I0127 14:13:51.803810    1230 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 14:13:56 addons-097644 kubelet[1230]: E0127 14:13:56.136832    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987236136383914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:13:56 addons-097644 kubelet[1230]: E0127 14:13:56.136917    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987236136383914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:14:06 addons-097644 kubelet[1230]: E0127 14:14:06.139770    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987246139208432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:14:06 addons-097644 kubelet[1230]: E0127 14:14:06.140002    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987246139208432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:14:16 addons-097644 kubelet[1230]: E0127 14:14:16.143369    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987256142682082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:14:16 addons-097644 kubelet[1230]: E0127 14:14:16.143731    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987256142682082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:14:16 addons-097644 kubelet[1230]: E0127 14:14:16.663610    1230 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Jan 27 14:14:16 addons-097644 kubelet[1230]: E0127 14:14:16.663783    1230 kuberuntime_image.go:55] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Jan 27 14:14:16 addons-097644 kubelet[1230]: E0127 14:14:16.664437    1230 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:helper-pod,Image:docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79,Command:[/bin/sh /script/setup],Args:[-p /opt/local-path-provisioner/pvc-83479c15-7788-4f68-a4ed-f41471decbab_default_test-pvc -s 67108864 -m Filesystem],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:VOL_DIR,Value:/opt/local-path-provisioner/pvc-83479c15-7788-4f68-a4ed-f41471decbab_default_test-pvc,ValueFrom:nil,},EnvVar{Name:VOL_MODE,Value:Filesystem,ValueFrom:nil,},EnvVar{Name:VOL_SIZE_BYTES,Value:67108864,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:script,ReadOnly:false,MountPath:/script,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:data,ReadOnly:false,MountPath:/
opt/local-path-provisioner/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vrmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab_local-path-storage(56afd23a-3004-45f7-9a3b-5d120b588721): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the
limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jan 27 14:14:16 addons-097644 kubelet[1230]: E0127 14:14:16.666211    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab" podUID="56afd23a-3004-45f7-9a3b-5d120b588721"
	Jan 27 14:14:16 addons-097644 kubelet[1230]: I0127 14:14:16.803471    1230 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-89xv2" secret="" err="secret \"gcp-auth\" not found"
	Jan 27 14:14:17 addons-097644 kubelet[1230]: E0127 14:14:17.582258    1230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab" podUID="56afd23a-3004-45f7-9a3b-5d120b588721"
	Jan 27 14:14:25 addons-097644 kubelet[1230]: E0127 14:14:25.833198    1230 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:14:25 addons-097644 kubelet[1230]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:14:25 addons-097644 kubelet[1230]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:14:25 addons-097644 kubelet[1230]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:14:25 addons-097644 kubelet[1230]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:14:26 addons-097644 kubelet[1230]: E0127 14:14:26.146702    1230 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987266146136502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:14:26 addons-097644 kubelet[1230]: E0127 14:14:26.146751    1230 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987266146136502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506112,},InodesUsed:&UInt64Value{Value:181,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d33c8ab68a095e3f9a19ef0816cd7f0039760858f3eb4b4e8be7a466c8a3b5f2] <==
	I0127 14:06:41.758709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:06:41.803907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:06:41.804042       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:06:41.825628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:06:41.825800       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5!
	I0127 14:06:41.826617       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"798f666d-0618-4e6e-9910-6786e4bc55d6", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5 became leader
	I0127 14:06:41.926306       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-097644_e66b6578-730a-4596-bb9a-5061db442ad5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-097644 -n addons-097644
helpers_test.go:261: (dbg) Run:  kubectl --context addons-097644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab: exit status 1 (87.314055ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-097644/192.168.39.228
	Start Time:       Mon, 27 Jan 2025 14:10:14 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hck28 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hck28:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m18s                default-scheduler  Successfully assigned default/nginx to addons-097644
	  Warning  Failed     77s (x2 over 3m3s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     77s (x2 over 3m3s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    64s (x2 over 3m3s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     64s (x2 over 3m3s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    53s (x3 over 4m18s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-097644/192.168.39.228
	Start Time:       Mon, 27 Jan 2025 14:09:54 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vdzn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-9vdzn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m38s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-097644
	  Warning  Failed     108s (x2 over 3m34s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     108s (x2 over 3m34s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    93s (x2 over 3m33s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     93s (x2 over 3m33s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    79s (x3 over 4m38s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xj65w (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xj65w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-k6p8j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bzwfx" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-097644 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-k6p8j ingress-nginx-admission-patch-bzwfx helper-pod-create-pvc-83479c15-7788-4f68-a4ed-f41471decbab: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (2m2.502091174s)
--- FAIL: TestAddons/parallel/LocalPath (425.14s)
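The kubelet and pod events above attribute the LocalPath (and Ingress) image failures to Docker Hub's anonymous-pull rate limit (toomanyrequests) rather than to the addons themselves. A minimal sketch of one way to authenticate the pulls, assuming registry credentials are available to the job; the secret name regcred and the two environment variables are hypothetical and not part of this run:

	# create a Docker Hub pull secret in the default namespace of the test profile
	kubectl --context addons-097644 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKERHUB_USER" \
	  --docker-password="$DOCKERHUB_TOKEN"
	# attach it to the default service account so pods such as default/nginx use it
	kubectl --context addons-097644 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Authenticated pulls raise the Docker Hub limit, which is what the error text itself suggests; alternatively, pre-loading the test images into the node (for example with minikube image load) would avoid contacting docker.io at all.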

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (188.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8134de1b-9f43-48b2-8405-d2306418e7bd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004645048s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-354053 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-354053 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-354053 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-354053 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ebf05447-f691-4ade-96c9-00b397d988e5] Pending
helpers_test.go:344: "sp-pod" [ebf05447-f691-4ade-96c9-00b397d988e5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-354053 -n functional-354053
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-01-27 14:26:23.980825812 +0000 UTC m=+1255.653619963
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-354053 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-354053 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-354053/192.168.39.247
Start Time:       Mon, 27 Jan 2025 14:23:23 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:  10.244.0.9
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9t22r (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-9t22r:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-354053
Warning  Failed     2m30s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     49s (x2 over 2m30s)  kubelet            Error: ErrImagePull
Warning  Failed     49s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    38s (x2 over 2m29s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     38s (x2 over 2m29s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    23s (x3 over 3m)     kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-354053 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-354053 logs sp-pod -n default: exit status 1 (68.248089ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-354053 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
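The sp-pod events show the same docker.io rate limiting as the failures above: the pod was scheduled onto functional-354053 and only the myfrontend image pull is failing, so the PVC machinery itself is not implicated. A small sketch of how the registry could be bypassed for this profile, assuming the nginx image is already present on the runner and that the pod's imagePullPolicy allows a cached image to be used (both assumptions, not verified against testdata/storage-provisioner/pod.yaml):

	# push the locally cached nginx image into the functional-354053 node's container storage
	out/minikube-linux-amd64 -p functional-354053 image load docker.io/nginx
	# confirm the node now sees the image without contacting docker.io
	out/minikube-linux-amd64 -p functional-354053 image ls | grep nginx

Otherwise the pull-secret approach sketched after the LocalPath failure applies here as well, since the toomanyrequests message points at the anonymous-pull limit.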
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-354053 -n functional-354053
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 logs -n 25: (1.608557548s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-354053 ssh stat                                               | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC | 27 Jan 25 14:23 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh sudo                                               | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC | 27 Jan 25 14:23 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port3274950700/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC | 27 Jan 25 14:23 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh -- ls                                              | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh sudo                                               | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount1     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount3     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount2     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| update-context | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh pgrep                                              | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-354053 image build -t                                         | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | localhost/my-image:functional-354053                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-354053 image ls                                               | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	| image          | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:23:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:23:29.389880 1025700 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:23:29.390074 1025700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:23:29.390091 1025700 out.go:358] Setting ErrFile to fd 2...
	I0127 14:23:29.390098 1025700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:23:29.390528 1025700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:23:29.391414 1025700 out.go:352] Setting JSON to false
	I0127 14:23:29.392962 1025700 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":18356,"bootTime":1737969453,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:23:29.393100 1025700 start.go:139] virtualization: kvm guest
	I0127 14:23:29.395451 1025700 out.go:177] * [functional-354053] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:23:29.396819 1025700 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:23:29.396840 1025700 notify.go:220] Checking for updates...
	I0127 14:23:29.399233 1025700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:23:29.400589 1025700 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:23:29.401838 1025700 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:23:29.403108 1025700 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:23:29.404395 1025700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:23:29.406227 1025700 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:23:29.406818 1025700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:23:29.406888 1025700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:23:29.423313 1025700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37767
	I0127 14:23:29.423928 1025700 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:23:29.424678 1025700 main.go:141] libmachine: Using API Version  1
	I0127 14:23:29.424706 1025700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:23:29.425141 1025700 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:23:29.425319 1025700 main.go:141] libmachine: (functional-354053) Calling .DriverName
	I0127 14:23:29.425574 1025700 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:23:29.425916 1025700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:23:29.425955 1025700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:23:29.443236 1025700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0127 14:23:29.443854 1025700 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:23:29.444488 1025700 main.go:141] libmachine: Using API Version  1
	I0127 14:23:29.444505 1025700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:23:29.444866 1025700 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:23:29.445060 1025700 main.go:141] libmachine: (functional-354053) Calling .DriverName
	I0127 14:23:29.483470 1025700 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:23:29.484803 1025700 start.go:297] selected driver: kvm2
	I0127 14:23:29.484822 1025700 start.go:901] validating driver "kvm2" against &{Name:functional-354053 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-354053 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:23:29.484970 1025700 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:23:29.486981 1025700 out.go:201] 
	W0127 14:23:29.488336 1025700 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I0127 14:23:29.489548 1025700 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.821660649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987984821631063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4a9ff40-7eae-4b47-a8be-1aaeed0e82c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.822168886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c336f3ba-dbae-4931-92a2-e1341ceb6a5d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.822329330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c336f3ba-dbae-4931-92a2-e1341ceb6a5d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.822677087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ee37b4b3e1206b31c449040e80b73ec786c7575b837921bf1242cab7cb60b8b,PodSandboxId:664ccc8e4bf91776b49e3cc415822dd47cd971051cfc4e4db26eb67504cbd0af,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737987904752678294,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-pbcgz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 31713492-60ad-4c3e-90c3-be2da436ea5c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba,PodSandboxId:3b0bf1ef5fb97549c9fab9852517c20547cfb24132dd552ae9180ef9f5b3d4f5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987902486883373,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-xhp74,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 704924aa-bf67-4686-8c97-b3118c26e0f0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4,PodSandboxId:6d084a4e0a7562819fb18cd95399fa2293b8bccae340313ec95ca1ddc2244e26,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737987836167444039,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28522223-1c9a-40dc-bf05-aba4db084b30,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2551b6c588eed8a3b8b23c06fc2ec4706de77d7c583f49c533e52091c1c49fa1,PodSandboxId:eb3f91549bacca05f1108f082b0674e027799122c3778424bc020587ae0fb1ab,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801555935295,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-krgns,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545ea35f-d082-4ec6-9395-474eb765ae58,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088a91a7df0414c1a3a02771f28933f64953d9c2d6e9684858b3f0c40b33ab2a,PodSandboxId:ba0939bae8872239a4ff9833a4e95f955858d282a61a18ccfb830c26fb6c83d4,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801162275372,Labels:map[string]string{io.kubernetes.container.n
ame: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-bfvhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90a774f-43ce-446b-a2ce-7cbbca5e3a7a,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6,PodSandboxId:5977cb792beeff187994617a69abc26c5d7c262af18a86619f94de40809e2efc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987770235192340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c,PodSandboxId:06b99cc53ca9455b69e4ac729b8b5fd2f97000125e752415c050ac9eee2b4049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987770225690694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: st
orage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b,PodSandboxId:605be2b4fca27d8f288ec1936c77ebcae88139fec3db8f4b03e12e58636bb746,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987770257383331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b,PodSandboxId:ee8ad53c1b54aaad50b42fd57fbbc2746dc99d0b2c0a05df90ca4db7be9f4f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe555639
4553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987766843389042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5787c60ce7e171df80e78196dc5b8f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5,PodSandboxId:79952744037ab04ab9ec610c549d1fe95dad8d65b4560d54537ce60c0718f773,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,
State:CONTAINER_RUNNING,CreatedAt:1737987766566078340,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7,PodSandboxId:d793b58819561a0fa29dbf49acc0bee94637f80d36f883a528c915b77c80bf9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,
CreatedAt:1737987766538550016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826,PodSandboxId:2873370561f95655e7421a19912654d46f41e4a5fa328c1ff2f18a9510ffef4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNN
ING,CreatedAt:1737987766547779949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217,PodSandboxId:3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737987
726930717881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7,PodSandboxId:dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737987726554040675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4,PodSandboxId:1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737987726519426974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5,PodSandboxId:93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737987722778400273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8,PodSandboxId:7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987722742566957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d,PodSandboxId:7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737987722742610386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c336f3ba-dbae-4931-92a2-e1341ceb6a5d name=/runtime.v1.RuntimeService/ListContainers
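As an aside for anyone trying to reproduce these queries by hand: the entries above are CRI-O's debug traces of ordinary CRI calls (RuntimeService/Version, ImageService/ImageFsInfo and an unfiltered RuntimeService/ListContainers). A minimal Go sketch that replays the same calls against a CRI-O socket could look like the following; the socket path /var/run/crio/crio.sock and the use of the k8s.io/cri-api and google.golang.org/grpc modules are assumptions for illustration, not something taken from this report.

// Hedged sketch: replay the Version, ImageFsInfo and ListContainers RPCs
// seen in the CRI-O debug log. Socket path and module choice are assumptions.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; adjust if the node uses a different path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// Mirrors the RuntimeService/Version entries in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Mirrors the ImageService/ImageFsInfo entries in the log.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("ImageFsInfo: %v", err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("imagefs %s used=%d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// Mirrors the unfiltered RuntimeService/ListContainers entries in the log.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}

For manual checks on the node itself, crictl version, crictl imagefsinfo, and crictl ps -a issue the same RPCs, which is essentially how this log output is gathered during test teardown.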
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.866691725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=385e6a01-8761-4497-9012-77888eefe3c1 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.866758797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=385e6a01-8761-4497-9012-77888eefe3c1 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.868632264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b40761bc-8a96-4e79-b42b-f3c7b09b561e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.869339040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987984869316883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b40761bc-8a96-4e79-b42b-f3c7b09b561e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.870466677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d405929a-3d41-49ca-9a98-1ccbb91731e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.870544657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d405929a-3d41-49ca-9a98-1ccbb91731e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.870953815Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ee37b4b3e1206b31c449040e80b73ec786c7575b837921bf1242cab7cb60b8b,PodSandboxId:664ccc8e4bf91776b49e3cc415822dd47cd971051cfc4e4db26eb67504cbd0af,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737987904752678294,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-pbcgz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 31713492-60ad-4c3e-90c3-be2da436ea5c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba,PodSandboxId:3b0bf1ef5fb97549c9fab9852517c20547cfb24132dd552ae9180ef9f5b3d4f5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987902486883373,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-xhp74,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 704924aa-bf67-4686-8c97-b3118c26e0f0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4,PodSandboxId:6d084a4e0a7562819fb18cd95399fa2293b8bccae340313ec95ca1ddc2244e26,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737987836167444039,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28522223-1c9a-40dc-bf05-aba4db084b30,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2551b6c588eed8a3b8b23c06fc2ec4706de77d7c583f49c533e52091c1c49fa1,PodSandboxId:eb3f91549bacca05f1108f082b0674e027799122c3778424bc020587ae0fb1ab,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801555935295,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-krgns,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545ea35f-d082-4ec6-9395-474eb765ae58,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088a91a7df0414c1a3a02771f28933f64953d9c2d6e9684858b3f0c40b33ab2a,PodSandboxId:ba0939bae8872239a4ff9833a4e95f955858d282a61a18ccfb830c26fb6c83d4,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801162275372,Labels:map[string]string{io.kubernetes.container.n
ame: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-bfvhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90a774f-43ce-446b-a2ce-7cbbca5e3a7a,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6,PodSandboxId:5977cb792beeff187994617a69abc26c5d7c262af18a86619f94de40809e2efc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987770235192340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c,PodSandboxId:06b99cc53ca9455b69e4ac729b8b5fd2f97000125e752415c050ac9eee2b4049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987770225690694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: st
orage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b,PodSandboxId:605be2b4fca27d8f288ec1936c77ebcae88139fec3db8f4b03e12e58636bb746,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987770257383331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b,PodSandboxId:ee8ad53c1b54aaad50b42fd57fbbc2746dc99d0b2c0a05df90ca4db7be9f4f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe555639
4553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987766843389042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5787c60ce7e171df80e78196dc5b8f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5,PodSandboxId:79952744037ab04ab9ec610c549d1fe95dad8d65b4560d54537ce60c0718f773,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,
State:CONTAINER_RUNNING,CreatedAt:1737987766566078340,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7,PodSandboxId:d793b58819561a0fa29dbf49acc0bee94637f80d36f883a528c915b77c80bf9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,
CreatedAt:1737987766538550016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826,PodSandboxId:2873370561f95655e7421a19912654d46f41e4a5fa328c1ff2f18a9510ffef4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNN
ING,CreatedAt:1737987766547779949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217,PodSandboxId:3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737987
726930717881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7,PodSandboxId:dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737987726554040675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4,PodSandboxId:1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737987726519426974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5,PodSandboxId:93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737987722778400273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8,PodSandboxId:7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987722742566957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d,PodSandboxId:7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737987722742610386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d405929a-3d41-49ca-9a98-1ccbb91731e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.906715169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d811a3d5-236e-4abc-a09b-7fff28066e14 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.906784728Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d811a3d5-236e-4abc-a09b-7fff28066e14 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.908081422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dab25e1-ac7e-450d-afef-fe26b7ad9867 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.908774463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987984908752860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dab25e1-ac7e-450d-afef-fe26b7ad9867 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.909352295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fcdbea3-7457-4f53-8825-9db9f62d388f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.909426438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fcdbea3-7457-4f53-8825-9db9f62d388f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.909816612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ee37b4b3e1206b31c449040e80b73ec786c7575b837921bf1242cab7cb60b8b,PodSandboxId:664ccc8e4bf91776b49e3cc415822dd47cd971051cfc4e4db26eb67504cbd0af,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737987904752678294,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-pbcgz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 31713492-60ad-4c3e-90c3-be2da436ea5c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba,PodSandboxId:3b0bf1ef5fb97549c9fab9852517c20547cfb24132dd552ae9180ef9f5b3d4f5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987902486883373,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-xhp74,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 704924aa-bf67-4686-8c97-b3118c26e0f0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4,PodSandboxId:6d084a4e0a7562819fb18cd95399fa2293b8bccae340313ec95ca1ddc2244e26,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737987836167444039,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28522223-1c9a-40dc-bf05-aba4db084b30,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2551b6c588eed8a3b8b23c06fc2ec4706de77d7c583f49c533e52091c1c49fa1,PodSandboxId:eb3f91549bacca05f1108f082b0674e027799122c3778424bc020587ae0fb1ab,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801555935295,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-krgns,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545ea35f-d082-4ec6-9395-474eb765ae58,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088a91a7df0414c1a3a02771f28933f64953d9c2d6e9684858b3f0c40b33ab2a,PodSandboxId:ba0939bae8872239a4ff9833a4e95f955858d282a61a18ccfb830c26fb6c83d4,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801162275372,Labels:map[string]string{io.kubernetes.container.n
ame: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-bfvhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90a774f-43ce-446b-a2ce-7cbbca5e3a7a,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6,PodSandboxId:5977cb792beeff187994617a69abc26c5d7c262af18a86619f94de40809e2efc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987770235192340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c,PodSandboxId:06b99cc53ca9455b69e4ac729b8b5fd2f97000125e752415c050ac9eee2b4049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987770225690694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: st
orage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b,PodSandboxId:605be2b4fca27d8f288ec1936c77ebcae88139fec3db8f4b03e12e58636bb746,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987770257383331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b,PodSandboxId:ee8ad53c1b54aaad50b42fd57fbbc2746dc99d0b2c0a05df90ca4db7be9f4f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe555639
4553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987766843389042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5787c60ce7e171df80e78196dc5b8f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5,PodSandboxId:79952744037ab04ab9ec610c549d1fe95dad8d65b4560d54537ce60c0718f773,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,
State:CONTAINER_RUNNING,CreatedAt:1737987766566078340,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7,PodSandboxId:d793b58819561a0fa29dbf49acc0bee94637f80d36f883a528c915b77c80bf9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,
CreatedAt:1737987766538550016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826,PodSandboxId:2873370561f95655e7421a19912654d46f41e4a5fa328c1ff2f18a9510ffef4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNN
ING,CreatedAt:1737987766547779949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217,PodSandboxId:3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737987
726930717881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7,PodSandboxId:dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737987726554040675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4,PodSandboxId:1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737987726519426974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5,PodSandboxId:93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737987722778400273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8,PodSandboxId:7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987722742566957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d,PodSandboxId:7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737987722742610386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fcdbea3-7457-4f53-8825-9db9f62d388f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.950617897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ea4045c-f72b-4253-8548-4ecdb5c79584 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.950696022Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ea4045c-f72b-4253-8548-4ecdb5c79584 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.952835123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e086eac-f1f0-47fe-b7d0-a083181ba310 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.953524780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987984953503618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e086eac-f1f0-47fe-b7d0-a083181ba310 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.954095280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6917a859-4005-4361-a3bc-2dd85a15464c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.954238686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6917a859-4005-4361-a3bc-2dd85a15464c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:26:24 functional-354053 crio[4480]: time="2025-01-27 14:26:24.954557610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ee37b4b3e1206b31c449040e80b73ec786c7575b837921bf1242cab7cb60b8b,PodSandboxId:664ccc8e4bf91776b49e3cc415822dd47cd971051cfc4e4db26eb67504cbd0af,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737987904752678294,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-pbcgz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 31713492-60ad-4c3e-90c3-be2da436ea5c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba,PodSandboxId:3b0bf1ef5fb97549c9fab9852517c20547cfb24132dd552ae9180ef9f5b3d4f5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987902486883373,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-xhp74,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 704924aa-bf67-4686-8c97-b3118c26e0f0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4,PodSandboxId:6d084a4e0a7562819fb18cd95399fa2293b8bccae340313ec95ca1ddc2244e26,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737987836167444039,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28522223-1c9a-40dc-bf05-aba4db084b30,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2551b6c588eed8a3b8b23c06fc2ec4706de77d7c583f49c533e52091c1c49fa1,PodSandboxId:eb3f91549bacca05f1108f082b0674e027799122c3778424bc020587ae0fb1ab,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801555935295,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-krgns,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545ea35f-d082-4ec6-9395-474eb765ae58,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088a91a7df0414c1a3a02771f28933f64953d9c2d6e9684858b3f0c40b33ab2a,PodSandboxId:ba0939bae8872239a4ff9833a4e95f955858d282a61a18ccfb830c26fb6c83d4,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801162275372,Labels:map[string]string{io.kubernetes.container.n
ame: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-bfvhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90a774f-43ce-446b-a2ce-7cbbca5e3a7a,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6,PodSandboxId:5977cb792beeff187994617a69abc26c5d7c262af18a86619f94de40809e2efc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987770235192340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c,PodSandboxId:06b99cc53ca9455b69e4ac729b8b5fd2f97000125e752415c050ac9eee2b4049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987770225690694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: st
orage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b,PodSandboxId:605be2b4fca27d8f288ec1936c77ebcae88139fec3db8f4b03e12e58636bb746,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987770257383331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b,PodSandboxId:ee8ad53c1b54aaad50b42fd57fbbc2746dc99d0b2c0a05df90ca4db7be9f4f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe555639
4553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987766843389042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5787c60ce7e171df80e78196dc5b8f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5,PodSandboxId:79952744037ab04ab9ec610c549d1fe95dad8d65b4560d54537ce60c0718f773,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,
State:CONTAINER_RUNNING,CreatedAt:1737987766566078340,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7,PodSandboxId:d793b58819561a0fa29dbf49acc0bee94637f80d36f883a528c915b77c80bf9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,
CreatedAt:1737987766538550016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826,PodSandboxId:2873370561f95655e7421a19912654d46f41e4a5fa328c1ff2f18a9510ffef4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNN
ING,CreatedAt:1737987766547779949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217,PodSandboxId:3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737987
726930717881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7,PodSandboxId:dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737987726554040675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4,PodSandboxId:1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737987726519426974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5,PodSandboxId:93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737987722778400273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8,PodSandboxId:7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987722742566957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d,PodSandboxId:7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737987722742610386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6917a859-4005-4361-a3bc-2dd85a15464c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	5ee37b4b3e120       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   664ccc8e4bf91       dashboard-metrics-scraper-5d59dccf9b-pbcgz
	c0e955bb9507d       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   3b0bf1ef5fb97       kubernetes-dashboard-7779f9b69b-xhp74
	ebd4e409d6eef       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago        Exited              mount-munger                0                   6d084a4e0a756       busybox-mount
	2551b6c588eed       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 3 minutes ago        Running             echoserver                  0                   eb3f91549bacc       hello-node-connect-58f9cf68d8-krgns
	088a91a7df041       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago        Running             echoserver                  0                   ba0939bae8872       hello-node-fcfd88b6f-bfvhm
	2f24c678c7f9d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     2                   605be2b4fca27       coredns-668d6bf9bc-clss9
	4d939964a23b5       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 3 minutes ago        Running             kube-proxy                  2                   5977cb792beef       kube-proxy-9lpvn
	c20104e62b930       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         2                   06b99cc53ca94       storage-provisioner
	a1be836193e75       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                 3 minutes ago        Running             kube-apiserver              0                   ee8ad53c1b54a       kube-apiserver-functional-354053
	a823644731794       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 3 minutes ago        Running             etcd                        2                   79952744037ab       etcd-functional-354053
	cc44fda3c1416       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 3 minutes ago        Running             kube-scheduler              2                   2873370561f95       kube-scheduler-functional-354053
	d52e2ad338218       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 3 minutes ago        Running             kube-controller-manager     2                   d793b58819561       kube-controller-manager-functional-354053
	1c2f4f2849cce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago        Exited              coredns                     1                   3a721b6e824c2       coredns-668d6bf9bc-clss9
	2294b1739725a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago        Exited              storage-provisioner         1                   dfb43d8b52480       storage-provisioner
	49ea23fa5210b       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 4 minutes ago        Exited              kube-proxy                  1                   1b922d323340c       kube-proxy-9lpvn
	9edf77131a441       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 4 minutes ago        Exited              etcd                        1                   93ff3e8cb8c68       etcd-functional-354053
	9dded0a0d154a       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 4 minutes ago        Exited              kube-scheduler              1                   7f55af87ce837       kube-scheduler-functional-354053
	cc6e9a63f79fd       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 4 minutes ago        Exited              kube-controller-manager     1                   7b0978f2754da       kube-controller-manager-functional-354053
	
	
	==> coredns [1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53534 - 46475 "HINFO IN 2217809593452091882.4535506307984023863. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.05471968s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58446 - 37101 "HINFO IN 4080665393069892780.2579021628035971684. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034193208s
	
	
	==> describe nodes <==
	Name:               functional-354053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-354053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=functional-354053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_21_29_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:21:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-354053
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:26:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:25:21 +0000   Mon, 27 Jan 2025 14:21:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:25:21 +0000   Mon, 27 Jan 2025 14:21:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:25:21 +0000   Mon, 27 Jan 2025 14:21:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:25:21 +0000   Mon, 27 Jan 2025 14:21:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    functional-354053
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 e12d017be01f45b180b4abb500a508a3
	  System UUID:                e12d017b-e01f-45b1-80b4-abb500a508a3
	  Boot ID:                    5b694413-5e72-428f-b29a-a2aa7c86698b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-krgns           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     hello-node-fcfd88b6f-bfvhm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     mysql-58ccfd96bb-g4cbx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    2m56s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-clss9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m52s
	  kube-system                 etcd-functional-354053                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m56s
	  kube-system                 kube-apiserver-functional-354053              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-controller-manager-functional-354053     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-9lpvn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-functional-354053              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-pbcgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-xhp74         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  Starting                 3m34s                  kube-proxy       
	  Normal  Starting                 4m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m56s                  kubelet          Node functional-354053 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m56s                  kubelet          Node functional-354053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s                  kubelet          Node functional-354053 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m56s                  kubelet          Starting kubelet.
	  Normal  NodeReady                4m55s                  kubelet          Node functional-354053 status is now: NodeReady
	  Normal  RegisteredNode           4m53s                  node-controller  Node functional-354053 event: Registered Node functional-354053 in Controller
	  Normal  NodeHasNoDiskPressure    4m23s (x8 over 4m23s)  kubelet          Node functional-354053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m23s (x8 over 4m23s)  kubelet          Node functional-354053 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m23s (x7 over 4m23s)  kubelet          Node functional-354053 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m17s                  node-controller  Node functional-354053 event: Registered Node functional-354053 in Controller
	  Normal  Starting                 3m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s (x9 over 3m39s)  kubelet          Node functional-354053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s (x7 over 3m39s)  kubelet          Node functional-354053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x7 over 3m39s)  kubelet          Node functional-354053 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m33s                  node-controller  Node functional-354053 event: Registered Node functional-354053 in Controller
	
	
	==> dmesg <==
	[  +0.141986] systemd-fstab-generator[2471]: Ignoring "noauto" option for root device
	[  +0.278030] systemd-fstab-generator[2499]: Ignoring "noauto" option for root device
	[  +7.064631] systemd-fstab-generator[2626]: Ignoring "noauto" option for root device
	[  +0.074400] kauditd_printk_skb: 100 callbacks suppressed
	[Jan27 14:22] systemd-fstab-generator[2750]: Ignoring "noauto" option for root device
	[  +4.566235] kauditd_printk_skb: 74 callbacks suppressed
	[ +16.702778] systemd-fstab-generator[3555]: Ignoring "noauto" option for root device
	[  +0.091214] kauditd_printk_skb: 37 callbacks suppressed
	[ +18.027041] systemd-fstab-generator[4405]: Ignoring "noauto" option for root device
	[  +0.073098] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.052071] systemd-fstab-generator[4417]: Ignoring "noauto" option for root device
	[  +0.198233] systemd-fstab-generator[4431]: Ignoring "noauto" option for root device
	[  +0.125651] systemd-fstab-generator[4443]: Ignoring "noauto" option for root device
	[  +0.284373] systemd-fstab-generator[4471]: Ignoring "noauto" option for root device
	[  +0.782621] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +2.936487] systemd-fstab-generator[5093]: Ignoring "noauto" option for root device
	[  +0.700008] kauditd_printk_skb: 200 callbacks suppressed
	[  +6.473568] kauditd_printk_skb: 41 callbacks suppressed
	[Jan27 14:23] systemd-fstab-generator[5679]: Ignoring "noauto" option for root device
	[  +6.499783] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.468589] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.038353] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.846188] kauditd_printk_skb: 2 callbacks suppressed
	[ +26.327652] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 14:24] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5] <==
	{"level":"info","ts":"2025-01-27T14:22:04.459254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T14:22:04.459270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 received MsgPreVoteResp from b60ca5935c0b4769 at term 2"}
	{"level":"info","ts":"2025-01-27T14:22:04.459295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T14:22:04.459302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 received MsgVoteResp from b60ca5935c0b4769 at term 3"}
	{"level":"info","ts":"2025-01-27T14:22:04.459310Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T14:22:04.459316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b60ca5935c0b4769 elected leader b60ca5935c0b4769 at term 3"}
	{"level":"info","ts":"2025-01-27T14:22:04.464738Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b60ca5935c0b4769","local-member-attributes":"{Name:functional-354053 ClientURLs:[https://192.168.39.247:2379]}","request-path":"/0/members/b60ca5935c0b4769/attributes","cluster-id":"7fda2fc0436a8884","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T14:22:04.464862Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:22:04.465621Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:22:04.466355Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.247:2379"}
	{"level":"info","ts":"2025-01-27T14:22:04.466610Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:22:04.467022Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:22:04.467572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T14:22:04.470220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T14:22:04.470253Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T14:22:35.047774Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-01-27T14:22:35.047833Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-354053","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.247:2380"],"advertise-client-urls":["https://192.168.39.247:2379"]}
	{"level":"warn","ts":"2025-01-27T14:22:35.047950Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.247:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:22:35.047981Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.247:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:22:35.048089Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:22:35.048210Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-01-27T14:22:35.095494Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b60ca5935c0b4769","current-leader-member-id":"b60ca5935c0b4769"}
	{"level":"info","ts":"2025-01-27T14:22:35.099462Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.247:2380"}
	{"level":"info","ts":"2025-01-27T14:22:35.099573Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.247:2380"}
	{"level":"info","ts":"2025-01-27T14:22:35.099682Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-354053","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.247:2380"],"advertise-client-urls":["https://192.168.39.247:2379"]}
	
	
	==> etcd [a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5] <==
	{"level":"info","ts":"2025-01-27T14:22:48.048707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became leader at term 4"}
	{"level":"info","ts":"2025-01-27T14:22:48.048725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b60ca5935c0b4769 elected leader b60ca5935c0b4769 at term 4"}
	{"level":"info","ts":"2025-01-27T14:22:48.055477Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b60ca5935c0b4769","local-member-attributes":"{Name:functional-354053 ClientURLs:[https://192.168.39.247:2379]}","request-path":"/0/members/b60ca5935c0b4769/attributes","cluster-id":"7fda2fc0436a8884","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T14:22:48.055495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:22:48.055701Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T14:22:48.055731Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T14:22:48.055527Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:22:48.056480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:22:48.057034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.247:2379"}
	{"level":"info","ts":"2025-01-27T14:22:48.058442Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:22:48.059369Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T14:23:29.815193Z","caller":"traceutil/trace.go:171","msg":"trace[539507025] linearizableReadLoop","detail":"{readStateIndex:774; appliedIndex:773; }","duration":"243.147493ms","start":"2025-01-27T14:23:29.571950Z","end":"2025-01-27T14:23:29.815097Z","steps":["trace[539507025] 'read index received'  (duration: 243.032343ms)","trace[539507025] 'applied index is now lower than readState.Index'  (duration: 113.241µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:23:29.815509Z","caller":"traceutil/trace.go:171","msg":"trace[671407638] transaction","detail":"{read_only:false; response_revision:703; number_of_response:1; }","duration":"252.938183ms","start":"2025-01-27T14:23:29.562561Z","end":"2025-01-27T14:23:29.815499Z","steps":["trace[671407638] 'process raft request'  (duration: 252.458232ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:23:29.815911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.899123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:23:29.819035Z","caller":"traceutil/trace.go:171","msg":"trace[1766627855] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:703; }","duration":"247.097504ms","start":"2025-01-27T14:23:29.571924Z","end":"2025-01-27T14:23:29.819021Z","steps":["trace[1766627855] 'agreement among raft nodes before linearized reading'  (duration: 243.837488ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:23:29.816036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.69343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" limit:1 ","response":"range_response_count:1 size:63988"}
	{"level":"info","ts":"2025-01-27T14:23:29.820867Z","caller":"traceutil/trace.go:171","msg":"trace[1751903035] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:703; }","duration":"245.538066ms","start":"2025-01-27T14:23:29.575313Z","end":"2025-01-27T14:23:29.820851Z","steps":["trace[1751903035] 'agreement among raft nodes before linearized reading'  (duration: 240.64397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:23:29.816065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.566306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-27T14:23:29.816080Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.815078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:23:29.821948Z","caller":"traceutil/trace.go:171","msg":"trace[996813919] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:703; }","duration":"129.694172ms","start":"2025-01-27T14:23:29.692244Z","end":"2025-01-27T14:23:29.821938Z","steps":["trace[996813919] 'agreement among raft nodes before linearized reading'  (duration: 123.827309ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:23:29.822285Z","caller":"traceutil/trace.go:171","msg":"trace[350038628] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:703; }","duration":"113.799057ms","start":"2025-01-27T14:23:29.708477Z","end":"2025-01-27T14:23:29.822276Z","steps":["trace[350038628] 'agreement among raft nodes before linearized reading'  (duration: 107.578391ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:25:02.307591Z","caller":"traceutil/trace.go:171","msg":"trace[1169481583] transaction","detail":"{read_only:false; response_revision:882; number_of_response:1; }","duration":"268.007777ms","start":"2025-01-27T14:25:02.039547Z","end":"2025-01-27T14:25:02.307555Z","steps":["trace[1169481583] 'process raft request'  (duration: 267.865918ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:25:02.308257Z","caller":"traceutil/trace.go:171","msg":"trace[1517727960] linearizableReadLoop","detail":"{readStateIndex:974; appliedIndex:974; }","duration":"179.892398ms","start":"2025-01-27T14:25:02.128354Z","end":"2025-01-27T14:25:02.308246Z","steps":["trace[1517727960] 'read index received'  (duration: 179.889338ms)","trace[1517727960] 'applied index is now lower than readState.Index'  (duration: 2.416µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:25:02.308440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.069777ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:25:02.308511Z","caller":"traceutil/trace.go:171","msg":"trace[663188896] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:882; }","duration":"180.156228ms","start":"2025-01-27T14:25:02.128347Z","end":"2025-01-27T14:25:02.308504Z","steps":["trace[663188896] 'agreement among raft nodes before linearized reading'  (duration: 180.059382ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:26:25 up 5 min,  0 users,  load average: 0.22, 0.43, 0.22
	Linux functional-354053 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b] <==
	I0127 14:22:49.227504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 14:22:49.229251       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 14:22:49.229290       1 policy_source.go:240] refreshing policies
	I0127 14:22:49.229360       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 14:22:49.229400       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 14:22:49.239053       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0127 14:22:49.250856       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0127 14:22:49.273703       1 cache.go:39] Caches are synced for autoregister controller
	I0127 14:22:49.279319       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 14:22:49.982334       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 14:22:50.122861       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 14:22:50.734045       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 14:22:50.771341       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 14:22:50.794054       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 14:22:50.800448       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 14:22:52.590825       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 14:22:52.788530       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 14:22:52.838792       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 14:23:12.506363       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.46.57"}
	I0127 14:23:16.924655       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.58.162"}
	I0127 14:23:19.972563       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.203.208"}
	I0127 14:23:29.845002       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.18.202"}
	I0127 14:23:30.932893       1 controller.go:615] quota admission added evaluator for: namespaces
	I0127 14:23:31.222390       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.148.87"}
	I0127 14:23:31.264075       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.94.200"}
	
	
	==> kube-controller-manager [cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8] <==
	I0127 14:22:08.913762       1 shared_informer.go:320] Caches are synced for disruption
	I0127 14:22:08.923391       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 14:22:08.923835       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 14:22:08.924017       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 14:22:08.924805       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 14:22:08.924881       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 14:22:08.925237       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 14:22:08.925312       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 14:22:08.925354       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 14:22:08.925988       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 14:22:08.927195       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 14:22:08.927255       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 14:22:08.929309       1 shared_informer.go:320] Caches are synced for TTL
	I0127 14:22:08.931425       1 shared_informer.go:320] Caches are synced for deployment
	I0127 14:22:08.932364       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 14:22:08.933502       1 shared_informer.go:320] Caches are synced for service account
	I0127 14:22:08.935777       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 14:22:08.945079       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 14:22:08.945611       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 14:22:08.945762       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 14:22:08.945867       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 14:22:08.949388       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 14:22:08.954812       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 14:22:08.954913       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 14:22:08.960320       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	
	
	==> kube-controller-manager [d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7] <==
	I0127 14:23:31.083319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="8.604804ms"
	E0127 14:23:31.083362       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:23:31.083405       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="13.486232ms"
	E0127 14:23:31.083414       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:23:31.090386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="5.628606ms"
	E0127 14:23:31.090430       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:23:31.090472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="5.895722ms"
	E0127 14:23:31.090480       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:23:31.133105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="27.890584ms"
	I0127 14:23:31.152439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="19.151386ms"
	I0127 14:23:31.165242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="32.445414ms"
	I0127 14:23:31.201431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="48.844221ms"
	I0127 14:23:31.201528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="56.914µs"
	I0127 14:23:31.206515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="41.2227ms"
	I0127 14:23:31.206582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="27.855µs"
	I0127 14:23:50.207281       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-354053"
	I0127 14:24:20.873467       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-354053"
	I0127 14:24:57.847437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="102.881µs"
	I0127 14:25:02.899338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="11.99443ms"
	I0127 14:25:02.900450       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="103.661µs"
	I0127 14:25:04.917759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="12.769876ms"
	I0127 14:25:04.919726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="29.421µs"
	I0127 14:25:12.936963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="55.467µs"
	I0127 14:25:21.986933       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-354053"
	I0127 14:26:18.939601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="148.123µs"
	
	
	==> kube-proxy [49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:22:06.863740       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:22:06.877598       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.247"]
	E0127 14:22:06.877698       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:22:06.986302       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:22:06.986349       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:22:06.986372       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:22:06.991277       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:22:06.991562       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:22:06.991590       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:06.995696       1 config.go:199] "Starting service config controller"
	I0127 14:22:06.995761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:22:06.995817       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:22:06.995834       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:22:06.996398       1 config.go:329] "Starting node config controller"
	I0127 14:22:06.996439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:22:07.096222       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:22:07.096333       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:22:07.096524       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:22:50.724557       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:22:50.741640       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.247"]
	E0127 14:22:50.741721       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:22:50.806387       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:22:50.806447       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:22:50.806469       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:22:50.809047       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:22:50.809320       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:22:50.809352       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:50.812751       1 config.go:199] "Starting service config controller"
	I0127 14:22:50.812801       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:22:50.812823       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:22:50.812827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:22:50.813763       1 config.go:329] "Starting node config controller"
	I0127 14:22:50.813791       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:22:50.913373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:22:50.913405       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:22:50.914010       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d] <==
	I0127 14:22:04.069894       1 serving.go:386] Generated self-signed cert in-memory
	W0127 14:22:05.664682       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 14:22:05.664777       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 14:22:05.664800       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 14:22:05.664817       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 14:22:05.718736       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 14:22:05.720427       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:05.722595       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 14:22:05.722659       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 14:22:05.723275       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 14:22:05.722672       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 14:22:05.823957       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 14:22:35.069651       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826] <==
	I0127 14:22:47.725403       1 serving.go:386] Generated self-signed cert in-memory
	W0127 14:22:49.192538       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 14:22:49.192647       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 14:22:49.192673       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 14:22:49.192686       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 14:22:49.227406       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 14:22:49.227441       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:49.230016       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 14:22:49.230421       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 14:22:49.231204       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 14:22:49.231257       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 14:22:49.331825       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:25:45 functional-354053 kubelet[5100]: E0127 14:25:45.957174    5100 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 14:25:45 functional-354053 kubelet[5100]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 14:25:45 functional-354053 kubelet[5100]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:25:45 functional-354053 kubelet[5100]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:25:45 functional-354053 kubelet[5100]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.045447    5100 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd8ba205bcbaa29776e909c62b24a71b0/crio-93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde: Error finding container 93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde: Status 404 returned error can't find the container with id 93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.045661    5100 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc422b2dd8f3b2fd158b86d54699f7a17/crio-7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295: Error finding container 7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295: Status 404 returned error can't find the container with id 7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.045988    5100 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2b75a45a5acc0196c4b6709b05ee255d/crio-7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4: Error finding container 7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4: Status 404 returned error can't find the container with id 7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.046201    5100 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8134de1b-9f43-48b2-8405-d2306418e7bd/crio-dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b: Error finding container dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b: Status 404 returned error can't find the container with id dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.046501    5100 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3d25c7a8-9825-48c9-aff4-4fb02fc71c7b/crio-1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3: Error finding container 1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3: Status 404 returned error can't find the container with id 1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.046747    5100 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode6032c71-a225-4356-a049-706a54647858/crio-3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652: Error finding container 3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652: Status 404 returned error can't find the container with id 3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.099343    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987946099055421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.099384    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987946099055421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:25:46 functional-354053 kubelet[5100]: E0127 14:25:46.921271    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ebf05447-f691-4ade-96c9-00b397d988e5"
	Jan 27 14:25:56 functional-354053 kubelet[5100]: E0127 14:25:56.101437    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987956101104728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:25:56 functional-354053 kubelet[5100]: E0127 14:25:56.101582    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987956101104728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:26:06 functional-354053 kubelet[5100]: E0127 14:26:06.015076    5100 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Jan 27 14:26:06 functional-354053 kubelet[5100]: E0127 14:26:06.015279    5100 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Jan 27 14:26:06 functional-354053 kubelet[5100]: E0127 14:26:06.015591    5100 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lxf6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext
:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-g4cbx_default(19337626-b84b-4816-95ce-09221be5167e): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Jan 27 14:26:06 functional-354053 kubelet[5100]: E0127 14:26:06.017603    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-g4cbx" podUID="19337626-b84b-4816-95ce-09221be5167e"
	Jan 27 14:26:06 functional-354053 kubelet[5100]: E0127 14:26:06.103250    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987966102903699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:26:06 functional-354053 kubelet[5100]: E0127 14:26:06.103325    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987966102903699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:26:16 functional-354053 kubelet[5100]: E0127 14:26:16.104820    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987976104548670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:26:16 functional-354053 kubelet[5100]: E0127 14:26:16.104872    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737987976104548670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:26:18 functional-354053 kubelet[5100]: E0127 14:26:18.923981    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-g4cbx" podUID="19337626-b84b-4816-95ce-09221be5167e"
	
	
	==> kubernetes-dashboard [c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba] <==
	2025/01/27 14:25:02 Starting overwatch
	2025/01/27 14:25:02 Using namespace: kubernetes-dashboard
	2025/01/27 14:25:02 Using in-cluster config to connect to apiserver
	2025/01/27 14:25:02 Using secret token for csrf signing
	2025/01/27 14:25:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/27 14:25:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/27 14:25:02 Successful initial request to the apiserver, version: v1.32.1
	2025/01/27 14:25:02 Generating JWE encryption key
	2025/01/27 14:25:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/27 14:25:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/27 14:25:02 Initializing JWE encryption key from synchronized object
	2025/01/27 14:25:02 Creating in-cluster Sidecar client
	2025/01/27 14:25:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:25:02 Serving insecurely on HTTP port: 9090
	2025/01/27 14:25:32 Successful request to sidecar
	
	
	==> storage-provisioner [2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7] <==
	I0127 14:22:06.740062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:22:06.765962       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:22:06.766095       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:22:24.170947       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:22:24.171792       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f83e328-fa0c-4b8e-993c-da7207b298ef", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-354053_8e50e530-54e1-4a19-8081-afa083db3604 became leader
	I0127 14:22:24.173574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-354053_8e50e530-54e1-4a19-8081-afa083db3604!
	I0127 14:22:24.274777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-354053_8e50e530-54e1-4a19-8081-afa083db3604!
	
	
	==> storage-provisioner [c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c] <==
	I0127 14:22:50.463444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:22:50.526720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:22:50.529438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:23:07.942180       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:23:07.942602       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f83e328-fa0c-4b8e-993c-da7207b298ef", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-354053_b7c745fe-c80e-4afb-be4c-154e4d4ee06c became leader
	I0127 14:23:07.942710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-354053_b7c745fe-c80e-4afb-be4c-154e4d4ee06c!
	I0127 14:23:08.043308       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-354053_b7c745fe-c80e-4afb-be4c-154e4d4ee06c!
	I0127 14:23:23.498596       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0127 14:23:23.500003       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d26bf790-8820-421d-a921-caa967471eec 325 0 2025-01-27 14:21:34 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-01-27 14:21:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-60217037-59a2-4af5-8713-657e81cbfb2a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  60217037-59a2-4af5-8713-657e81cbfb2a 686 0 2025-01-27 14:23:23 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-01-27 14:23:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-01-27 14:23:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0127 14:23:23.500645       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-60217037-59a2-4af5-8713-657e81cbfb2a" provisioned
	I0127 14:23:23.500717       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0127 14:23:23.500743       1 volume_store.go:212] Trying to save persistentvolume "pvc-60217037-59a2-4af5-8713-657e81cbfb2a"
	I0127 14:23:23.501924       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"60217037-59a2-4af5-8713-657e81cbfb2a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0127 14:23:23.524012       1 volume_store.go:219] persistentvolume "pvc-60217037-59a2-4af5-8713-657e81cbfb2a" saved
	I0127 14:23:23.524420       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"60217037-59a2-4af5-8713-657e81cbfb2a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-60217037-59a2-4af5-8713-657e81cbfb2a
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-354053 -n functional-354053
helpers_test.go:261: (dbg) Run:  kubectl --context functional-354053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-g4cbx sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-354053 describe pod busybox-mount mysql-58ccfd96bb-g4cbx sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-354053 describe pod busybox-mount mysql-58ccfd96bb-g4cbx sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-354053/192.168.39.247
	Start Time:       Mon, 27 Jan 2025 14:23:29 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Jan 2025 14:23:56 +0000
	      Finished:     Mon, 27 Jan 2025 14:23:56 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p8h8w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p8h8w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m56s  default-scheduler  Successfully assigned default/busybox-mount to functional-354053
	  Normal  Pulling    2m56s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m30s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.205s (25.643s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m30s  kubelet            Created container: mount-munger
	  Normal  Started    2m30s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-g4cbx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-354053/192.168.39.247
	Start Time:       Mon, 27 Jan 2025 14:23:29 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxf6h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lxf6h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m56s                default-scheduler  Successfully assigned default/mysql-58ccfd96bb-g4cbx to functional-354053
	  Warning  Failed     89s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    74s (x2 over 2m56s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     20s (x2 over 89s)    kubelet            Error: ErrImagePull
	  Warning  Failed     20s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    8s (x2 over 89s)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     8s (x2 over 89s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-354053/192.168.39.247
	Start Time:       Mon, 27 Jan 2025 14:23:23 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9t22r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9t22r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-354053
	  Warning  Failed     2m32s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     51s (x2 over 2m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     51s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    40s (x2 over 2m31s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     40s (x2 over 2m31s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    25s (x3 over 3m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0127 14:26:50.098069 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:06.238243 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:33.940078 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.12s)

                                                
                                    
TestFunctional/parallel/MySQL (603.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-354053 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-g4cbx" [19337626-b84b-4816-95ce-09221be5167e] Pending
helpers_test.go:344: "mysql-58ccfd96bb-g4cbx" [19337626-b84b-4816-95ce-09221be5167e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-354053 -n functional-354053
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-01-27 14:33:30.187911119 +0000 UTC m=+1681.860705279
functional_test.go:1799: (dbg) Run:  kubectl --context functional-354053 describe po mysql-58ccfd96bb-g4cbx -n default
functional_test.go:1799: (dbg) kubectl --context functional-354053 describe po mysql-58ccfd96bb-g4cbx -n default:
Name:             mysql-58ccfd96bb-g4cbx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-354053/192.168.39.247
Start Time:       Mon, 27 Jan 2025 14:23:29 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxf6h (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lxf6h:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-g4cbx to functional-354053
Warning  Failed     5m2s (x2 over 8m33s)   kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    3m38s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     2m39s (x5 over 8m33s)  kubelet            Error: ErrImagePull
Warning  Failed     2m39s (x3 over 7m24s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     66s (x16 over 8m33s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    2s (x21 over 8m33s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1799: (dbg) Run:  kubectl --context functional-354053 logs mysql-58ccfd96bb-g4cbx -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-354053 logs mysql-58ccfd96bb-g4cbx -n default: exit status 1 (71.978979ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-g4cbx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-354053 logs mysql-58ccfd96bb-g4cbx -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-354053 -n functional-354053
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 logs -n 25: (1.50387936s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-354053 ssh stat                                               | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC | 27 Jan 25 14:23 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh sudo                                               | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC | 27 Jan 25 14:23 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port3274950700/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:23 UTC | 27 Jan 25 14:23 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh -- ls                                              | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh sudo                                               | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount1     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount3     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount2     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh findmnt                                            | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-354053                                                     | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| update-context | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-354053 ssh pgrep                                              | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-354053 image build -t                                         | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | localhost/my-image:functional-354053                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-354053 image ls                                               | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	| image          | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-354053                                                        | functional-354053 | jenkins | v1.35.0 | 27 Jan 25 14:24 UTC | 27 Jan 25 14:24 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:23:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:23:29.389880 1025700 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:23:29.390074 1025700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:23:29.390091 1025700 out.go:358] Setting ErrFile to fd 2...
	I0127 14:23:29.390098 1025700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:23:29.390528 1025700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:23:29.391414 1025700 out.go:352] Setting JSON to false
	I0127 14:23:29.392962 1025700 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":18356,"bootTime":1737969453,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:23:29.393100 1025700 start.go:139] virtualization: kvm guest
	I0127 14:23:29.395451 1025700 out.go:177] * [functional-354053] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:23:29.396819 1025700 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:23:29.396840 1025700 notify.go:220] Checking for updates...
	I0127 14:23:29.399233 1025700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:23:29.400589 1025700 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:23:29.401838 1025700 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:23:29.403108 1025700 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:23:29.404395 1025700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:23:29.406227 1025700 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:23:29.406818 1025700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:23:29.406888 1025700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:23:29.423313 1025700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37767
	I0127 14:23:29.423928 1025700 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:23:29.424678 1025700 main.go:141] libmachine: Using API Version  1
	I0127 14:23:29.424706 1025700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:23:29.425141 1025700 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:23:29.425319 1025700 main.go:141] libmachine: (functional-354053) Calling .DriverName
	I0127 14:23:29.425574 1025700 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:23:29.425916 1025700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:23:29.425955 1025700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:23:29.443236 1025700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0127 14:23:29.443854 1025700 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:23:29.444488 1025700 main.go:141] libmachine: Using API Version  1
	I0127 14:23:29.444505 1025700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:23:29.444866 1025700 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:23:29.445060 1025700 main.go:141] libmachine: (functional-354053) Calling .DriverName
	I0127 14:23:29.483470 1025700 out.go:177] * Using the kvm2 driver based on the existing profile
	I0127 14:23:29.484803 1025700 start.go:297] selected driver: kvm2
	I0127 14:23:29.484822 1025700 start.go:901] validating driver "kvm2" against &{Name:functional-354053 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-354053 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:23:29.484970 1025700 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:23:29.486981 1025700 out.go:201] 
	W0127 14:23:29.488336 1025700 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0127 14:23:29.489548 1025700 out.go:201] 
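	The start recorded above aborts during driver validation: the requested 250 MiB is below minikube's usable minimum of 1800 MB (RSRC_INSUFFICIENT_REQ_MEMORY), even though the existing functional-354053 profile was created with 4000 MB. A minimal, self-contained Go sketch of that kind of pre-flight check follows; the constant and function names are assumptions for illustration, not minikube's actual implementation.

	package main

	import "fmt"

	// minUsableMB is the usable minimum reported in the log above (1800 MB).
	const minUsableMB = 1800

	// validateRequestedMemory mirrors the failure mode in the "Last Start" log:
	// a request below the usable minimum is rejected with an explanatory error.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		// 250 MB, as requested in the failed start above; the profile itself has 4000 MB.
		if err := validateRequestedMemory(250); err != nil {
			fmt.Println("X Exiting due to", err)
			return
		}
		fmt.Println("memory request accepted")
	}

	Running this prints an error of the same shape as the W-level line above; a request at or above 1800 MB passes the check.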
	
	
	==> CRI-O <==
	Jan 27 14:33:30 functional-354053 crio[4480]: time="2025-01-27 14:33:30.974000557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988410973977483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de9539b6-421e-492c-92ab-cd14648f1a9a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:33:30 functional-354053 crio[4480]: time="2025-01-27 14:33:30.974640476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=382eff59-c4c8-4707-a4ec-1b005a091d08 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:30 functional-354053 crio[4480]: time="2025-01-27 14:33:30.974715960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=382eff59-c4c8-4707-a4ec-1b005a091d08 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:30 functional-354053 crio[4480]: time="2025-01-27 14:33:30.975038759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ee37b4b3e1206b31c449040e80b73ec786c7575b837921bf1242cab7cb60b8b,PodSandboxId:664ccc8e4bf91776b49e3cc415822dd47cd971051cfc4e4db26eb67504cbd0af,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737987904752678294,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-pbcgz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 31713492-60ad-4c3e-90c3-be2da436ea5c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba,PodSandboxId:3b0bf1ef5fb97549c9fab9852517c20547cfb24132dd552ae9180ef9f5b3d4f5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987902486883373,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-xhp74,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 704924aa-bf67-4686-8c97-b3118c26e0f0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4,PodSandboxId:6d084a4e0a7562819fb18cd95399fa2293b8bccae340313ec95ca1ddc2244e26,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737987836167444039,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28522223-1c9a-40dc-bf05-aba4db084b30,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2551b6c588eed8a3b8b23c06fc2ec4706de77d7c583f49c533e52091c1c49fa1,PodSandboxId:eb3f91549bacca05f1108f082b0674e027799122c3778424bc020587ae0fb1ab,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801555935295,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-krgns,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545ea35f-d082-4ec6-9395-474eb765ae58,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088a91a7df0414c1a3a02771f28933f64953d9c2d6e9684858b3f0c40b33ab2a,PodSandboxId:ba0939bae8872239a4ff9833a4e95f955858d282a61a18ccfb830c26fb6c83d4,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801162275372,Labels:map[string]string{io.kubernetes.container.n
ame: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-bfvhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90a774f-43ce-446b-a2ce-7cbbca5e3a7a,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6,PodSandboxId:5977cb792beeff187994617a69abc26c5d7c262af18a86619f94de40809e2efc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987770235192340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c,PodSandboxId:06b99cc53ca9455b69e4ac729b8b5fd2f97000125e752415c050ac9eee2b4049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987770225690694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: st
orage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b,PodSandboxId:605be2b4fca27d8f288ec1936c77ebcae88139fec3db8f4b03e12e58636bb746,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987770257383331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b,PodSandboxId:ee8ad53c1b54aaad50b42fd57fbbc2746dc99d0b2c0a05df90ca4db7be9f4f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe555639
4553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987766843389042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5787c60ce7e171df80e78196dc5b8f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5,PodSandboxId:79952744037ab04ab9ec610c549d1fe95dad8d65b4560d54537ce60c0718f773,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,
State:CONTAINER_RUNNING,CreatedAt:1737987766566078340,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7,PodSandboxId:d793b58819561a0fa29dbf49acc0bee94637f80d36f883a528c915b77c80bf9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,
CreatedAt:1737987766538550016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826,PodSandboxId:2873370561f95655e7421a19912654d46f41e4a5fa328c1ff2f18a9510ffef4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNN
ING,CreatedAt:1737987766547779949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217,PodSandboxId:3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737987
726930717881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7,PodSandboxId:dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737987726554040675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4,PodSandboxId:1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737987726519426974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5,PodSandboxId:93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737987722778400273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8,PodSandboxId:7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987722742566957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d,PodSandboxId:7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737987722742610386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=382eff59-c4c8-4707-a4ec-1b005a091d08 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.015650125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6561e252-62eb-4f0d-a894-ada3401d6d26 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.015727581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6561e252-62eb-4f0d-a894-ada3401d6d26 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.016788557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2017b600-4cb9-4f2c-bb60-ec72d5071cb1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.017565845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988411017544331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2017b600-4cb9-4f2c-bb60-ec72d5071cb1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.018258064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=872fa364-59d0-40f9-9d05-9b3e18a76a1c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.018333452Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=872fa364-59d0-40f9-9d05-9b3e18a76a1c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.022433679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ee37b4b3e1206b31c449040e80b73ec786c7575b837921bf1242cab7cb60b8b,PodSandboxId:664ccc8e4bf91776b49e3cc415822dd47cd971051cfc4e4db26eb67504cbd0af,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737987904752678294,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-pbcgz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 31713492-60ad-4c3e-90c3-be2da436ea5c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba,PodSandboxId:3b0bf1ef5fb97549c9fab9852517c20547cfb24132dd552ae9180ef9f5b3d4f5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987902486883373,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-xhp74,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 704924aa-bf67-4686-8c97-b3118c26e0f0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4,PodSandboxId:6d084a4e0a7562819fb18cd95399fa2293b8bccae340313ec95ca1ddc2244e26,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737987836167444039,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28522223-1c9a-40dc-bf05-aba4db084b30,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2551b6c588eed8a3b8b23c06fc2ec4706de77d7c583f49c533e52091c1c49fa1,PodSandboxId:eb3f91549bacca05f1108f082b0674e027799122c3778424bc020587ae0fb1ab,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801555935295,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-krgns,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545ea35f-d082-4ec6-9395-474eb765ae58,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088a91a7df0414c1a3a02771f28933f64953d9c2d6e9684858b3f0c40b33ab2a,PodSandboxId:ba0939bae8872239a4ff9833a4e95f955858d282a61a18ccfb830c26fb6c83d4,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801162275372,Labels:map[string]string{io.kubernetes.container.n
ame: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-bfvhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90a774f-43ce-446b-a2ce-7cbbca5e3a7a,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6,PodSandboxId:5977cb792beeff187994617a69abc26c5d7c262af18a86619f94de40809e2efc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987770235192340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c,PodSandboxId:06b99cc53ca9455b69e4ac729b8b5fd2f97000125e752415c050ac9eee2b4049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987770225690694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: st
orage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b,PodSandboxId:605be2b4fca27d8f288ec1936c77ebcae88139fec3db8f4b03e12e58636bb746,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987770257383331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b,PodSandboxId:ee8ad53c1b54aaad50b42fd57fbbc2746dc99d0b2c0a05df90ca4db7be9f4f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe555639
4553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987766843389042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5787c60ce7e171df80e78196dc5b8f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5,PodSandboxId:79952744037ab04ab9ec610c549d1fe95dad8d65b4560d54537ce60c0718f773,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,
State:CONTAINER_RUNNING,CreatedAt:1737987766566078340,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7,PodSandboxId:d793b58819561a0fa29dbf49acc0bee94637f80d36f883a528c915b77c80bf9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,
CreatedAt:1737987766538550016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826,PodSandboxId:2873370561f95655e7421a19912654d46f41e4a5fa328c1ff2f18a9510ffef4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNN
ING,CreatedAt:1737987766547779949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217,PodSandboxId:3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737987
726930717881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7,PodSandboxId:dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737987726554040675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4,PodSandboxId:1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737987726519426974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5,PodSandboxId:93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737987722778400273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8,PodSandboxId:7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987722742566957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d,PodSandboxId:7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737987722742610386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=872fa364-59d0-40f9-9d05-9b3e18a76a1c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.064363921Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f631792c-2bbd-4a1f-9f3c-e4c18eb6bb41 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.064452251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f631792c-2bbd-4a1f-9f3c-e4c18eb6bb41 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.066222616Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=141b185c-1cd3-4458-8374-77cad062ac5c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.066861931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988411066840713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=141b185c-1cd3-4458-8374-77cad062ac5c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.067597243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4ae2ca8-191a-4a72-9d93-2af14fb3973e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.067672186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4ae2ca8-191a-4a72-9d93-2af14fb3973e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.068011668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ee37b4b3e1206b31c449040e80b73ec786c7575b837921bf1242cab7cb60b8b,PodSandboxId:664ccc8e4bf91776b49e3cc415822dd47cd971051cfc4e4db26eb67504cbd0af,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737987904752678294,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-pbcgz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 31713492-60ad-4c3e-90c3-be2da436ea5c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba,PodSandboxId:3b0bf1ef5fb97549c9fab9852517c20547cfb24132dd552ae9180ef9f5b3d4f5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987902486883373,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-xhp74,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 704924aa-bf67-4686-8c97-b3118c26e0f0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4,PodSandboxId:6d084a4e0a7562819fb18cd95399fa2293b8bccae340313ec95ca1ddc2244e26,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737987836167444039,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28522223-1c9a-40dc-bf05-aba4db084b30,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2551b6c588eed8a3b8b23c06fc2ec4706de77d7c583f49c533e52091c1c49fa1,PodSandboxId:eb3f91549bacca05f1108f082b0674e027799122c3778424bc020587ae0fb1ab,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801555935295,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-krgns,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545ea35f-d082-4ec6-9395-474eb765ae58,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088a91a7df0414c1a3a02771f28933f64953d9c2d6e9684858b3f0c40b33ab2a,PodSandboxId:ba0939bae8872239a4ff9833a4e95f955858d282a61a18ccfb830c26fb6c83d4,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801162275372,Labels:map[string]string{io.kubernetes.container.n
ame: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-bfvhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90a774f-43ce-446b-a2ce-7cbbca5e3a7a,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6,PodSandboxId:5977cb792beeff187994617a69abc26c5d7c262af18a86619f94de40809e2efc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987770235192340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c,PodSandboxId:06b99cc53ca9455b69e4ac729b8b5fd2f97000125e752415c050ac9eee2b4049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987770225690694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: st
orage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b,PodSandboxId:605be2b4fca27d8f288ec1936c77ebcae88139fec3db8f4b03e12e58636bb746,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987770257383331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b,PodSandboxId:ee8ad53c1b54aaad50b42fd57fbbc2746dc99d0b2c0a05df90ca4db7be9f4f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe555639
4553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987766843389042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5787c60ce7e171df80e78196dc5b8f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5,PodSandboxId:79952744037ab04ab9ec610c549d1fe95dad8d65b4560d54537ce60c0718f773,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,
State:CONTAINER_RUNNING,CreatedAt:1737987766566078340,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7,PodSandboxId:d793b58819561a0fa29dbf49acc0bee94637f80d36f883a528c915b77c80bf9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,
CreatedAt:1737987766538550016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826,PodSandboxId:2873370561f95655e7421a19912654d46f41e4a5fa328c1ff2f18a9510ffef4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNN
ING,CreatedAt:1737987766547779949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217,PodSandboxId:3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737987
726930717881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7,PodSandboxId:dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737987726554040675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4,PodSandboxId:1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737987726519426974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5,PodSandboxId:93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737987722778400273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8,PodSandboxId:7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987722742566957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d,PodSandboxId:7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737987722742610386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4ae2ca8-191a-4a72-9d93-2af14fb3973e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.106282948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04b03202-545b-4f4f-afda-2f9d7bf43df8 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.106353758Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04b03202-545b-4f4f-afda-2f9d7bf43df8 name=/runtime.v1.RuntimeService/Version
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.107727788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78144652-bea3-4110-b671-74e06f35f68a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.108508586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988411108484861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78144652-bea3-4110-b671-74e06f35f68a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.109085352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f5c871a-8888-45d9-b084-9b994e57358a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.109187644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f5c871a-8888-45d9-b084-9b994e57358a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 14:33:31 functional-354053 crio[4480]: time="2025-01-27 14:33:31.109545070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ee37b4b3e1206b31c449040e80b73ec786c7575b837921bf1242cab7cb60b8b,PodSandboxId:664ccc8e4bf91776b49e3cc415822dd47cd971051cfc4e4db26eb67504cbd0af,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1737987904752678294,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-pbcgz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 31713492-60ad-4c3e-90c3-be2da436ea5c,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba,PodSandboxId:3b0bf1ef5fb97549c9fab9852517c20547cfb24132dd552ae9180ef9f5b3d4f5,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737987902486883373,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-xhp74,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 704924aa-bf67-4686-8c97-b3118c26e0f0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4,PodSandboxId:6d084a4e0a7562819fb18cd95399fa2293b8bccae340313ec95ca1ddc2244e26,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1737987836167444039,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28522223-1c9a-40dc-bf05-aba4db084b30,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2551b6c588eed8a3b8b23c06fc2ec4706de77d7c583f49c533e52091c1c49fa1,PodSandboxId:eb3f91549bacca05f1108f082b0674e027799122c3778424bc020587ae0fb1ab,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801555935295,Labels:map[string]string{io.kubernetes.container.name: echoserver,
io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-krgns,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545ea35f-d082-4ec6-9395-474eb765ae58,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:088a91a7df0414c1a3a02771f28933f64953d9c2d6e9684858b3f0c40b33ab2a,PodSandboxId:ba0939bae8872239a4ff9833a4e95f955858d282a61a18ccfb830c26fb6c83d4,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1737987801162275372,Labels:map[string]string{io.kubernetes.container.n
ame: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-bfvhm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c90a774f-43ce-446b-a2ce-7cbbca5e3a7a,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6,PodSandboxId:5977cb792beeff187994617a69abc26c5d7c262af18a86619f94de40809e2efc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737987770235192340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c,PodSandboxId:06b99cc53ca9455b69e4ac729b8b5fd2f97000125e752415c050ac9eee2b4049,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737987770225690694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: st
orage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b,PodSandboxId:605be2b4fca27d8f288ec1936c77ebcae88139fec3db8f4b03e12e58636bb746,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737987770257383331,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b,PodSandboxId:ee8ad53c1b54aaad50b42fd57fbbc2746dc99d0b2c0a05df90ca4db7be9f4f96,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe555639
4553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737987766843389042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e5787c60ce7e171df80e78196dc5b8f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5,PodSandboxId:79952744037ab04ab9ec610c549d1fe95dad8d65b4560d54537ce60c0718f773,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,
State:CONTAINER_RUNNING,CreatedAt:1737987766566078340,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7,PodSandboxId:d793b58819561a0fa29dbf49acc0bee94637f80d36f883a528c915b77c80bf9a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,
CreatedAt:1737987766538550016,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826,PodSandboxId:2873370561f95655e7421a19912654d46f41e4a5fa328c1ff2f18a9510ffef4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNN
ING,CreatedAt:1737987766547779949,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217,PodSandboxId:3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737987
726930717881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-clss9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6032c71-a225-4356-a049-706a54647858,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7,PodSandboxId:dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867
d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737987726554040675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8134de1b-9f43-48b2-8405-d2306418e7bd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4,PodSandboxId:1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737987726519426974,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9lpvn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d25c7a8-9825-48c9-aff4-4fb02fc71c7b,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5,PodSandboxId:93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737987722778400273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ba205bcbaa29776e909c62b24a71b0,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8,PodSandboxId:7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737987722742566957,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c422b2dd8f3b2fd158b86d54699f7a17,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d,PodSandboxId:7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737987722742610386,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-354053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b75a45a5acc0196c4b6709b05ee255d,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f5c871a-8888-45d9-b084-9b994e57358a name=/runtime.v1.RuntimeService/ListContainers
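
The ListContainers request/response pairs captured above are ordinary CRI gRPC calls against CRI-O's unix socket (unix:///var/run/crio/crio.sock, matching the cri-socket annotation shown in the node description below). As a rough, illustrative sketch only — not part of the test artifacts — a minimal Go client issuing the same call, roughly what crictl ps or the kubelet does, might look like the following; the socket path is taken from the logs, everything else is assumed for illustration:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI-O runtime endpoint over its unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter corresponds to the "No filters were applied, returning
	// full container list" debug line emitted by CRI-O above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}

	// Print a condensed view, similar to the "container status" table below.
	for _, c := range resp.Containers {
		fmt.Printf("%-13.13s %-17s %s\n", c.Id, c.State, c.Metadata.Name)
	}
}
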
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5ee37b4b3e120       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   8 minutes ago       Running             dashboard-metrics-scraper   0                   664ccc8e4bf91       dashboard-metrics-scraper-5d59dccf9b-pbcgz
	c0e955bb9507d       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         8 minutes ago       Running             kubernetes-dashboard        0                   3b0bf1ef5fb97       kubernetes-dashboard-7779f9b69b-xhp74
	ebd4e409d6eef       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   6d084a4e0a756       busybox-mount
	2551b6c588eed       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 10 minutes ago      Running             echoserver                  0                   eb3f91549bacc       hello-node-connect-58f9cf68d8-krgns
	088a91a7df041       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   ba0939bae8872       hello-node-fcfd88b6f-bfvhm
	2f24c678c7f9d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     2                   605be2b4fca27       coredns-668d6bf9bc-clss9
	4d939964a23b5       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 10 minutes ago      Running             kube-proxy                  2                   5977cb792beef       kube-proxy-9lpvn
	c20104e62b930       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   06b99cc53ca94       storage-provisioner
	a1be836193e75       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                 10 minutes ago      Running             kube-apiserver              0                   ee8ad53c1b54a       kube-apiserver-functional-354053
	a823644731794       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 10 minutes ago      Running             etcd                        2                   79952744037ab       etcd-functional-354053
	cc44fda3c1416       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 10 minutes ago      Running             kube-scheduler              2                   2873370561f95       kube-scheduler-functional-354053
	d52e2ad338218       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 10 minutes ago      Running             kube-controller-manager     2                   d793b58819561       kube-controller-manager-functional-354053
	1c2f4f2849cce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     1                   3a721b6e824c2       coredns-668d6bf9bc-clss9
	2294b1739725a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   dfb43d8b52480       storage-provisioner
	49ea23fa5210b       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 11 minutes ago      Exited              kube-proxy                  1                   1b922d323340c       kube-proxy-9lpvn
	9edf77131a441       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 11 minutes ago      Exited              etcd                        1                   93ff3e8cb8c68       etcd-functional-354053
	9dded0a0d154a       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 11 minutes ago      Exited              kube-scheduler              1                   7f55af87ce837       kube-scheduler-functional-354053
	cc6e9a63f79fd       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 11 minutes ago      Exited              kube-controller-manager     1                   7b0978f2754da       kube-controller-manager-functional-354053
	
	
	==> coredns [1c2f4f2849cce35fc41f756c6da674d8621635d2ff72754445d0651b349cd217] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53534 - 46475 "HINFO IN 2217809593452091882.4535506307984023863. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.05471968s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2f24c678c7f9d9c39421a32f8418b4445c7fc92673ed181f5405eee86803e23b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58446 - 37101 "HINFO IN 4080665393069892780.2579021628035971684. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034193208s
	
	
	==> describe nodes <==
	Name:               functional-354053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-354053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=functional-354053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_21_29_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:21:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-354053
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:33:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:31:29 +0000   Mon, 27 Jan 2025 14:21:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:31:29 +0000   Mon, 27 Jan 2025 14:21:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:31:29 +0000   Mon, 27 Jan 2025 14:21:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:31:29 +0000   Mon, 27 Jan 2025 14:21:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    functional-354053
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 e12d017be01f45b180b4abb500a508a3
	  System UUID:                e12d017b-e01f-45b1-80b4-abb500a508a3
	  Boot ID:                    5b694413-5e72-428f-b29a-a2aa7c86698b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-krgns           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-fcfd88b6f-bfvhm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-58ccfd96bb-g4cbx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-668d6bf9bc-clss9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-354053                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-354053              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-354053     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9lpvn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-354053              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-pbcgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-xhp74         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-354053 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-354053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-354053 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-354053 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node functional-354053 event: Registered Node functional-354053 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-354053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-354053 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-354053 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-354053 event: Registered Node functional-354053 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node functional-354053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)  kubelet          Node functional-354053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-354053 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-354053 event: Registered Node functional-354053 in Controller
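
For reference, the "Allocated resources" percentages above follow directly from the per-pod requests and the node capacity listed earlier: CPU requests sum to 600m (mysql) + 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 1350m, i.e. 1350m / 2000m ≈ 67% of the 2-CPU node, and memory requests sum to 512Mi + 70Mi + 100Mi = 682Mi, roughly 17% of 3912780Ki. The CPU limit (700m) and memory limits (170Mi + 700Mi = 870Mi) yield the 35% and 22% figures the same way.
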
	
	
	==> dmesg <==
	[  +0.141986] systemd-fstab-generator[2471]: Ignoring "noauto" option for root device
	[  +0.278030] systemd-fstab-generator[2499]: Ignoring "noauto" option for root device
	[  +7.064631] systemd-fstab-generator[2626]: Ignoring "noauto" option for root device
	[  +0.074400] kauditd_printk_skb: 100 callbacks suppressed
	[Jan27 14:22] systemd-fstab-generator[2750]: Ignoring "noauto" option for root device
	[  +4.566235] kauditd_printk_skb: 74 callbacks suppressed
	[ +16.702778] systemd-fstab-generator[3555]: Ignoring "noauto" option for root device
	[  +0.091214] kauditd_printk_skb: 37 callbacks suppressed
	[ +18.027041] systemd-fstab-generator[4405]: Ignoring "noauto" option for root device
	[  +0.073098] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.052071] systemd-fstab-generator[4417]: Ignoring "noauto" option for root device
	[  +0.198233] systemd-fstab-generator[4431]: Ignoring "noauto" option for root device
	[  +0.125651] systemd-fstab-generator[4443]: Ignoring "noauto" option for root device
	[  +0.284373] systemd-fstab-generator[4471]: Ignoring "noauto" option for root device
	[  +0.782621] systemd-fstab-generator[4592]: Ignoring "noauto" option for root device
	[  +2.936487] systemd-fstab-generator[5093]: Ignoring "noauto" option for root device
	[  +0.700008] kauditd_printk_skb: 200 callbacks suppressed
	[  +6.473568] kauditd_printk_skb: 41 callbacks suppressed
	[Jan27 14:23] systemd-fstab-generator[5679]: Ignoring "noauto" option for root device
	[  +6.499783] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.468589] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.038353] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.846188] kauditd_printk_skb: 2 callbacks suppressed
	[ +26.327652] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 14:24] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [9edf77131a4418be5a48102c48f46cb91e7bc7d3081be0b382ca56fe73bc8ef5] <==
	{"level":"info","ts":"2025-01-27T14:22:04.459254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T14:22:04.459270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 received MsgPreVoteResp from b60ca5935c0b4769 at term 2"}
	{"level":"info","ts":"2025-01-27T14:22:04.459295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T14:22:04.459302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 received MsgVoteResp from b60ca5935c0b4769 at term 3"}
	{"level":"info","ts":"2025-01-27T14:22:04.459310Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b60ca5935c0b4769 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T14:22:04.459316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b60ca5935c0b4769 elected leader b60ca5935c0b4769 at term 3"}
	{"level":"info","ts":"2025-01-27T14:22:04.464738Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"b60ca5935c0b4769","local-member-attributes":"{Name:functional-354053 ClientURLs:[https://192.168.39.247:2379]}","request-path":"/0/members/b60ca5935c0b4769/attributes","cluster-id":"7fda2fc0436a8884","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T14:22:04.464862Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:22:04.465621Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:22:04.466355Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.247:2379"}
	{"level":"info","ts":"2025-01-27T14:22:04.466610Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:22:04.467022Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:22:04.467572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T14:22:04.470220Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T14:22:04.470253Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T14:22:35.047774Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-01-27T14:22:35.047833Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-354053","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.247:2380"],"advertise-client-urls":["https://192.168.39.247:2379"]}
	{"level":"warn","ts":"2025-01-27T14:22:35.047950Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.247:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:22:35.047981Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.247:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:22:35.048089Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:22:35.048210Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-01-27T14:22:35.095494Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b60ca5935c0b4769","current-leader-member-id":"b60ca5935c0b4769"}
	{"level":"info","ts":"2025-01-27T14:22:35.099462Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.247:2380"}
	{"level":"info","ts":"2025-01-27T14:22:35.099573Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.247:2380"}
	{"level":"info","ts":"2025-01-27T14:22:35.099682Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-354053","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.247:2380"],"advertise-client-urls":["https://192.168.39.247:2379"]}
	
	
	==> etcd [a82364473179469ac74f8e047e8ae81e3af69d89fba5b26c969befa42e4933d5] <==
	{"level":"info","ts":"2025-01-27T14:22:48.055495Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:22:48.055701Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T14:22:48.055731Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T14:22:48.055527Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:22:48.056480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:22:48.057034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.247:2379"}
	{"level":"info","ts":"2025-01-27T14:22:48.058442Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:22:48.059369Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T14:23:29.815193Z","caller":"traceutil/trace.go:171","msg":"trace[539507025] linearizableReadLoop","detail":"{readStateIndex:774; appliedIndex:773; }","duration":"243.147493ms","start":"2025-01-27T14:23:29.571950Z","end":"2025-01-27T14:23:29.815097Z","steps":["trace[539507025] 'read index received'  (duration: 243.032343ms)","trace[539507025] 'applied index is now lower than readState.Index'  (duration: 113.241µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:23:29.815509Z","caller":"traceutil/trace.go:171","msg":"trace[671407638] transaction","detail":"{read_only:false; response_revision:703; number_of_response:1; }","duration":"252.938183ms","start":"2025-01-27T14:23:29.562561Z","end":"2025-01-27T14:23:29.815499Z","steps":["trace[671407638] 'process raft request'  (duration: 252.458232ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:23:29.815911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.899123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:23:29.819035Z","caller":"traceutil/trace.go:171","msg":"trace[1766627855] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:703; }","duration":"247.097504ms","start":"2025-01-27T14:23:29.571924Z","end":"2025-01-27T14:23:29.819021Z","steps":["trace[1766627855] 'agreement among raft nodes before linearized reading'  (duration: 243.837488ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:23:29.816036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.69343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" limit:1 ","response":"range_response_count:1 size:63988"}
	{"level":"info","ts":"2025-01-27T14:23:29.820867Z","caller":"traceutil/trace.go:171","msg":"trace[1751903035] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:703; }","duration":"245.538066ms","start":"2025-01-27T14:23:29.575313Z","end":"2025-01-27T14:23:29.820851Z","steps":["trace[1751903035] 'agreement among raft nodes before linearized reading'  (duration: 240.64397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:23:29.816065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.566306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-27T14:23:29.816080Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.815078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:23:29.821948Z","caller":"traceutil/trace.go:171","msg":"trace[996813919] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:703; }","duration":"129.694172ms","start":"2025-01-27T14:23:29.692244Z","end":"2025-01-27T14:23:29.821938Z","steps":["trace[996813919] 'agreement among raft nodes before linearized reading'  (duration: 123.827309ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:23:29.822285Z","caller":"traceutil/trace.go:171","msg":"trace[350038628] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:703; }","duration":"113.799057ms","start":"2025-01-27T14:23:29.708477Z","end":"2025-01-27T14:23:29.822276Z","steps":["trace[350038628] 'agreement among raft nodes before linearized reading'  (duration: 107.578391ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:25:02.307591Z","caller":"traceutil/trace.go:171","msg":"trace[1169481583] transaction","detail":"{read_only:false; response_revision:882; number_of_response:1; }","duration":"268.007777ms","start":"2025-01-27T14:25:02.039547Z","end":"2025-01-27T14:25:02.307555Z","steps":["trace[1169481583] 'process raft request'  (duration: 267.865918ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:25:02.308257Z","caller":"traceutil/trace.go:171","msg":"trace[1517727960] linearizableReadLoop","detail":"{readStateIndex:974; appliedIndex:974; }","duration":"179.892398ms","start":"2025-01-27T14:25:02.128354Z","end":"2025-01-27T14:25:02.308246Z","steps":["trace[1517727960] 'read index received'  (duration: 179.889338ms)","trace[1517727960] 'applied index is now lower than readState.Index'  (duration: 2.416µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T14:25:02.308440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.069777ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:25:02.308511Z","caller":"traceutil/trace.go:171","msg":"trace[663188896] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:882; }","duration":"180.156228ms","start":"2025-01-27T14:25:02.128347Z","end":"2025-01-27T14:25:02.308504Z","steps":["trace[663188896] 'agreement among raft nodes before linearized reading'  (duration: 180.059382ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T14:32:48.082204Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1067}
	{"level":"info","ts":"2025-01-27T14:32:48.096317Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1067,"took":"13.744402ms","hash":2119594955,"current-db-size-bytes":3739648,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":1601536,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-01-27T14:32:48.096388Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2119594955,"revision":1067,"compact-revision":-1}
	
	
	==> kernel <==
	 14:33:31 up 12 min,  0 users,  load average: 0.36, 0.24, 0.19
	Linux functional-354053 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a1be836193e750992cc427af8877b6210c3a50f284fa67527699cfb60e059a2b] <==
	I0127 14:22:49.227504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 14:22:49.229251       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 14:22:49.229290       1 policy_source.go:240] refreshing policies
	I0127 14:22:49.229360       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 14:22:49.229400       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 14:22:49.239053       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0127 14:22:49.250856       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0127 14:22:49.273703       1 cache.go:39] Caches are synced for autoregister controller
	I0127 14:22:49.279319       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 14:22:49.982334       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 14:22:50.122861       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 14:22:50.734045       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 14:22:50.771341       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 14:22:50.794054       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 14:22:50.800448       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 14:22:52.590825       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 14:22:52.788530       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 14:22:52.838792       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 14:23:12.506363       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.46.57"}
	I0127 14:23:16.924655       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.58.162"}
	I0127 14:23:19.972563       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.203.208"}
	I0127 14:23:29.845002       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.18.202"}
	I0127 14:23:30.932893       1 controller.go:615] quota admission added evaluator for: namespaces
	I0127 14:23:31.222390       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.148.87"}
	I0127 14:23:31.264075       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.94.200"}
	
	
	==> kube-controller-manager [cc6e9a63f79fd3daaa4d44755466539f94dfc050b0db35c1a36fe4f00d97d8e8] <==
	I0127 14:22:08.913762       1 shared_informer.go:320] Caches are synced for disruption
	I0127 14:22:08.923391       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 14:22:08.923835       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 14:22:08.924017       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 14:22:08.924805       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 14:22:08.924881       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 14:22:08.925237       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 14:22:08.925312       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 14:22:08.925354       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 14:22:08.925988       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 14:22:08.927195       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 14:22:08.927255       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 14:22:08.929309       1 shared_informer.go:320] Caches are synced for TTL
	I0127 14:22:08.931425       1 shared_informer.go:320] Caches are synced for deployment
	I0127 14:22:08.932364       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 14:22:08.933502       1 shared_informer.go:320] Caches are synced for service account
	I0127 14:22:08.935777       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 14:22:08.945079       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 14:22:08.945611       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 14:22:08.945762       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 14:22:08.945867       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 14:22:08.949388       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 14:22:08.954812       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 14:22:08.954913       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 14:22:08.960320       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	
	
	==> kube-controller-manager [d52e2ad3382187b41e54263d72ba6dab790425fdb473a31cb2ecded83e9dc2f7] <==
	I0127 14:23:31.133105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="27.890584ms"
	I0127 14:23:31.152439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="19.151386ms"
	I0127 14:23:31.165242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="32.445414ms"
	I0127 14:23:31.201431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="48.844221ms"
	I0127 14:23:31.201528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="56.914µs"
	I0127 14:23:31.206515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="41.2227ms"
	I0127 14:23:31.206582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="27.855µs"
	I0127 14:23:50.207281       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-354053"
	I0127 14:24:20.873467       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-354053"
	I0127 14:24:57.847437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="102.881µs"
	I0127 14:25:02.899338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="11.99443ms"
	I0127 14:25:02.900450       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="103.661µs"
	I0127 14:25:04.917759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="12.769876ms"
	I0127 14:25:04.919726       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="29.421µs"
	I0127 14:25:12.936963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="55.467µs"
	I0127 14:25:21.986933       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-354053"
	I0127 14:26:18.939601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="148.123µs"
	I0127 14:26:30.935908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="40.231µs"
	I0127 14:27:26.937893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="77.243µs"
	I0127 14:27:41.939984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="48.929µs"
	I0127 14:28:42.934824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="98.25µs"
	I0127 14:28:57.942715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="42.181µs"
	I0127 14:31:05.938693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="226.711µs"
	I0127 14:31:20.942553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="62.126µs"
	I0127 14:31:29.392596       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-354053"
	
	
	==> kube-proxy [49ea23fa5210bf04e54af9d9d70ee2db01c60d9b63ceef518f550c79358139c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:22:06.863740       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:22:06.877598       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.247"]
	E0127 14:22:06.877698       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:22:06.986302       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:22:06.986349       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:22:06.986372       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:22:06.991277       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:22:06.991562       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:22:06.991590       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:06.995696       1 config.go:199] "Starting service config controller"
	I0127 14:22:06.995761       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:22:06.995817       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:22:06.995834       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:22:06.996398       1 config.go:329] "Starting node config controller"
	I0127 14:22:06.996439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:22:07.096222       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:22:07.096333       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:22:07.096524       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4d939964a23b588fc22557a3233a68b212e578d52c63d80d2e38c98cab05b3f6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:22:50.724557       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:22:50.741640       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.247"]
	E0127 14:22:50.741721       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:22:50.806387       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:22:50.806447       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:22:50.806469       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:22:50.809047       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:22:50.809320       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:22:50.809352       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:50.812751       1 config.go:199] "Starting service config controller"
	I0127 14:22:50.812801       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:22:50.812823       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:22:50.812827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:22:50.813763       1 config.go:329] "Starting node config controller"
	I0127 14:22:50.813791       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:22:50.913373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:22:50.913405       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:22:50.914010       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9dded0a0d154ae5b44fd7d73dcb5d3850e90162a1058d78c33d04cd9b6b92e8d] <==
	I0127 14:22:04.069894       1 serving.go:386] Generated self-signed cert in-memory
	W0127 14:22:05.664682       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 14:22:05.664777       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 14:22:05.664800       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 14:22:05.664817       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 14:22:05.718736       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 14:22:05.720427       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:05.722595       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 14:22:05.722659       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 14:22:05.723275       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 14:22:05.722672       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 14:22:05.823957       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 14:22:35.069651       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cc44fda3c14169ac204404e7003d53f7bd7782f90e33465e1630238f20b39826] <==
	I0127 14:22:47.725403       1 serving.go:386] Generated self-signed cert in-memory
	W0127 14:22:49.192538       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 14:22:49.192647       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 14:22:49.192673       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 14:22:49.192686       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 14:22:49.227406       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 14:22:49.227441       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:22:49.230016       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 14:22:49.230421       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 14:22:49.231204       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 14:22:49.231257       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 14:22:49.331825       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:32:45 functional-354053 kubelet[5100]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 14:32:45 functional-354053 kubelet[5100]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 14:32:45 functional-354053 kubelet[5100]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.045067    5100 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2b75a45a5acc0196c4b6709b05ee255d/crio-7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4: Error finding container 7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4: Status 404 returned error can't find the container with id 7f55af87ce837f53754cb3a676e67ac0a71d72a879155c8c95dfa6fab05d5dc4
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.045638    5100 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd8ba205bcbaa29776e909c62b24a71b0/crio-93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde: Error finding container 93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde: Status 404 returned error can't find the container with id 93ff3e8cb8c68ff8d1544c4cfef99cccd42e58685ac5ea87d00a3dedb4a80cde
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.045970    5100 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3d25c7a8-9825-48c9-aff4-4fb02fc71c7b/crio-1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3: Error finding container 1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3: Status 404 returned error can't find the container with id 1b922d323340c2ae0af3fb932ae80735c7b393b731c39f27e4c6d31fbe23a4e3
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.046341    5100 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod8134de1b-9f43-48b2-8405-d2306418e7bd/crio-dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b: Error finding container dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b: Status 404 returned error can't find the container with id dfb43d8b52480b6eb41c8e14de42c2d60c925c84051236068d60a5ddc601341b
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.046606    5100 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc422b2dd8f3b2fd158b86d54699f7a17/crio-7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295: Error finding container 7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295: Status 404 returned error can't find the container with id 7b0978f2754daf3f1253c9337cc9c54822f732d9e2b22ce505dd72f9cd985295
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.046893    5100 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode6032c71-a225-4356-a049-706a54647858/crio-3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652: Error finding container 3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652: Status 404 returned error can't find the container with id 3a721b6e824c280553e8625bd73c50ff912566f4dff5007da9a973ca13965652
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.212955    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988366212710048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.213000    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988366212710048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:32:46 functional-354053 kubelet[5100]: E0127 14:32:46.921546    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ebf05447-f691-4ade-96c9-00b397d988e5"
	Jan 27 14:32:48 functional-354053 kubelet[5100]: E0127 14:32:48.922504    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-g4cbx" podUID="19337626-b84b-4816-95ce-09221be5167e"
	Jan 27 14:32:56 functional-354053 kubelet[5100]: E0127 14:32:56.215077    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988376214777172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:32:56 functional-354053 kubelet[5100]: E0127 14:32:56.215482    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988376214777172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:32:59 functional-354053 kubelet[5100]: E0127 14:32:59.921684    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ebf05447-f691-4ade-96c9-00b397d988e5"
	Jan 27 14:33:01 functional-354053 kubelet[5100]: E0127 14:33:01.923105    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-g4cbx" podUID="19337626-b84b-4816-95ce-09221be5167e"
	Jan 27 14:33:06 functional-354053 kubelet[5100]: E0127 14:33:06.217961    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988386217687972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:33:06 functional-354053 kubelet[5100]: E0127 14:33:06.218000    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988386217687972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:33:15 functional-354053 kubelet[5100]: E0127 14:33:15.924364    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-g4cbx" podUID="19337626-b84b-4816-95ce-09221be5167e"
	Jan 27 14:33:16 functional-354053 kubelet[5100]: E0127 14:33:16.220219    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988396219934760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:33:16 functional-354053 kubelet[5100]: E0127 14:33:16.220264    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988396219934760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:33:26 functional-354053 kubelet[5100]: E0127 14:33:26.222424    5100 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988406221989089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:33:26 functional-354053 kubelet[5100]: E0127 14:33:26.222694    5100 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737988406221989089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 14:33:28 functional-354053 kubelet[5100]: E0127 14:33:28.922882    5100 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-g4cbx" podUID="19337626-b84b-4816-95ce-09221be5167e"
	
	
	==> kubernetes-dashboard [c0e955bb9507d0eac7190c0da297cbc04480c7200392e8caeb8d1626977013ba] <==
	2025/01/27 14:25:02 Using namespace: kubernetes-dashboard
	2025/01/27 14:25:02 Using in-cluster config to connect to apiserver
	2025/01/27 14:25:02 Using secret token for csrf signing
	2025/01/27 14:25:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/01/27 14:25:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/01/27 14:25:02 Successful initial request to the apiserver, version: v1.32.1
	2025/01/27 14:25:02 Generating JWE encryption key
	2025/01/27 14:25:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/01/27 14:25:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/01/27 14:25:02 Initializing JWE encryption key from synchronized object
	2025/01/27 14:25:02 Creating in-cluster Sidecar client
	2025/01/27 14:25:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 14:25:02 Serving insecurely on HTTP port: 9090
	2025/01/27 14:25:32 Successful request to sidecar
	2025/01/27 14:25:02 Starting overwatch
	
	
	==> storage-provisioner [2294b1739725ad9aaed8938344fee7686c22d5e688d1c033dc314c65e2796ee7] <==
	I0127 14:22:06.740062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:22:06.765962       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:22:06.766095       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:22:24.170947       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:22:24.171792       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f83e328-fa0c-4b8e-993c-da7207b298ef", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-354053_8e50e530-54e1-4a19-8081-afa083db3604 became leader
	I0127 14:22:24.173574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-354053_8e50e530-54e1-4a19-8081-afa083db3604!
	I0127 14:22:24.274777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-354053_8e50e530-54e1-4a19-8081-afa083db3604!
	
	
	==> storage-provisioner [c20104e62b9307e210fd6e9aa7a97f74173dbd8adc8afe89de39953becacfe1c] <==
	I0127 14:22:50.463444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:22:50.526720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:22:50.529438       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:23:07.942180       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:23:07.942602       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f83e328-fa0c-4b8e-993c-da7207b298ef", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-354053_b7c745fe-c80e-4afb-be4c-154e4d4ee06c became leader
	I0127 14:23:07.942710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-354053_b7c745fe-c80e-4afb-be4c-154e4d4ee06c!
	I0127 14:23:08.043308       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-354053_b7c745fe-c80e-4afb-be4c-154e4d4ee06c!
	I0127 14:23:23.498596       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0127 14:23:23.500003       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d26bf790-8820-421d-a921-caa967471eec 325 0 2025-01-27 14:21:34 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-01-27 14:21:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-60217037-59a2-4af5-8713-657e81cbfb2a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  60217037-59a2-4af5-8713-657e81cbfb2a 686 0 2025-01-27 14:23:23 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-01-27 14:23:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-01-27 14:23:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0127 14:23:23.500645       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-60217037-59a2-4af5-8713-657e81cbfb2a" provisioned
	I0127 14:23:23.500717       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0127 14:23:23.500743       1 volume_store.go:212] Trying to save persistentvolume "pvc-60217037-59a2-4af5-8713-657e81cbfb2a"
	I0127 14:23:23.501924       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"60217037-59a2-4af5-8713-657e81cbfb2a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0127 14:23:23.524012       1 volume_store.go:219] persistentvolume "pvc-60217037-59a2-4af5-8713-657e81cbfb2a" saved
	I0127 14:23:23.524420       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"60217037-59a2-4af5-8713-657e81cbfb2a", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-60217037-59a2-4af5-8713-657e81cbfb2a
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-354053 -n functional-354053
helpers_test.go:261: (dbg) Run:  kubectl --context functional-354053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-g4cbx sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-354053 describe pod busybox-mount mysql-58ccfd96bb-g4cbx sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-354053 describe pod busybox-mount mysql-58ccfd96bb-g4cbx sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-354053/192.168.39.247
	Start Time:       Mon, 27 Jan 2025 14:23:29 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ebd4e409d6eefdec47b888058c980cf8fb0cc0aa26f8edc81c5770ee098c8ac4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Jan 2025 14:23:56 +0000
	      Finished:     Mon, 27 Jan 2025 14:23:56 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p8h8w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p8h8w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-354053
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m36s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.205s (25.643s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m36s  kubelet            Created container: mount-munger
	  Normal  Started    9m36s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-g4cbx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-354053/192.168.39.247
	Start Time:       Mon, 27 Jan 2025 14:23:29 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxf6h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lxf6h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-g4cbx to functional-354053
	  Warning  Failed     5m4s (x2 over 8m35s)   kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m40s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m41s (x5 over 8m35s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m41s (x3 over 7m26s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     68s (x16 over 8m35s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4s (x21 over 8m35s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-354053/192.168.39.247
	Start Time:       Mon, 27 Jan 2025 14:23:23 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9t22r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-9t22r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-354053
	  Warning  Failed     5m38s (x2 over 7m57s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m16s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m12s (x3 over 9m38s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m12s (x5 over 9m38s)  kubelet            Error: ErrImagePull
	  Warning  Failed     99s (x16 over 9m37s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    33s (x21 over 9m37s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (603.07s)

                                                
                                    
x
+
TestPreload (172.35s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-400198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-400198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.770532721s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-400198 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-400198 image pull gcr.io/k8s-minikube/busybox: (1.332193056s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-400198
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-400198: (7.298610334s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-400198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0127 15:18:16.947405 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:06.239067 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-400198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.755287074s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-400198 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-01-27 15:19:11.268710895 +0000 UTC m=+4422.941505048
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-400198 -n test-preload-400198
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-400198 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-400198 logs -n 25: (1.138454238s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-779469 ssh -n                                                                 | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:04 UTC | 27 Jan 25 15:04 UTC |
	|         | multinode-779469-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-779469 ssh -n multinode-779469 sudo cat                                       | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:04 UTC | 27 Jan 25 15:04 UTC |
	|         | /home/docker/cp-test_multinode-779469-m03_multinode-779469.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-779469 cp multinode-779469-m03:/home/docker/cp-test.txt                       | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:04 UTC | 27 Jan 25 15:04 UTC |
	|         | multinode-779469-m02:/home/docker/cp-test_multinode-779469-m03_multinode-779469-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-779469 ssh -n                                                                 | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:04 UTC | 27 Jan 25 15:04 UTC |
	|         | multinode-779469-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-779469 ssh -n multinode-779469-m02 sudo cat                                   | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:04 UTC | 27 Jan 25 15:04 UTC |
	|         | /home/docker/cp-test_multinode-779469-m03_multinode-779469-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-779469 node stop m03                                                          | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:04 UTC | 27 Jan 25 15:04 UTC |
	| node    | multinode-779469 node start                                                             | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:04 UTC | 27 Jan 25 15:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-779469                                                                | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:05 UTC |                     |
	| stop    | -p multinode-779469                                                                     | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:05 UTC | 27 Jan 25 15:08 UTC |
	| start   | -p multinode-779469                                                                     | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:08 UTC | 27 Jan 25 15:10 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-779469                                                                | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:10 UTC |                     |
	| node    | multinode-779469 node delete                                                            | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:10 UTC | 27 Jan 25 15:10 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-779469 stop                                                                   | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:10 UTC | 27 Jan 25 15:13 UTC |
	| start   | -p multinode-779469                                                                     | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:13 UTC | 27 Jan 25 15:15 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-779469                                                                | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:15 UTC |                     |
	| start   | -p multinode-779469-m02                                                                 | multinode-779469-m02 | jenkins | v1.35.0 | 27 Jan 25 15:15 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-779469-m03                                                                 | multinode-779469-m03 | jenkins | v1.35.0 | 27 Jan 25 15:15 UTC | 27 Jan 25 15:16 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-779469                                                                 | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:16 UTC |                     |
	| delete  | -p multinode-779469-m03                                                                 | multinode-779469-m03 | jenkins | v1.35.0 | 27 Jan 25 15:16 UTC | 27 Jan 25 15:16 UTC |
	| delete  | -p multinode-779469                                                                     | multinode-779469     | jenkins | v1.35.0 | 27 Jan 25 15:16 UTC | 27 Jan 25 15:16 UTC |
	| start   | -p test-preload-400198                                                                  | test-preload-400198  | jenkins | v1.35.0 | 27 Jan 25 15:16 UTC | 27 Jan 25 15:17 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-400198 image pull                                                          | test-preload-400198  | jenkins | v1.35.0 | 27 Jan 25 15:17 UTC | 27 Jan 25 15:17 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-400198                                                                  | test-preload-400198  | jenkins | v1.35.0 | 27 Jan 25 15:17 UTC | 27 Jan 25 15:18 UTC |
	| start   | -p test-preload-400198                                                                  | test-preload-400198  | jenkins | v1.35.0 | 27 Jan 25 15:18 UTC | 27 Jan 25 15:19 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-400198 image list                                                          | test-preload-400198  | jenkins | v1.35.0 | 27 Jan 25 15:19 UTC | 27 Jan 25 15:19 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 15:18:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 15:18:06.334637 1050991 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:18:06.334750 1050991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:18:06.334755 1050991 out.go:358] Setting ErrFile to fd 2...
	I0127 15:18:06.334760 1050991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:18:06.334932 1050991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:18:06.335498 1050991 out.go:352] Setting JSON to false
	I0127 15:18:06.336473 1050991 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21633,"bootTime":1737969453,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:18:06.336598 1050991 start.go:139] virtualization: kvm guest
	I0127 15:18:06.338709 1050991 out.go:177] * [test-preload-400198] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:18:06.340032 1050991 notify.go:220] Checking for updates...
	I0127 15:18:06.340045 1050991 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:18:06.341468 1050991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:18:06.342923 1050991 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:18:06.344272 1050991 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:18:06.345606 1050991 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:18:06.346934 1050991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:18:06.348637 1050991 config.go:182] Loaded profile config "test-preload-400198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 15:18:06.348985 1050991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:18:06.349058 1050991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:18:06.364508 1050991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I0127 15:18:06.365036 1050991 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:18:06.365630 1050991 main.go:141] libmachine: Using API Version  1
	I0127 15:18:06.365666 1050991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:18:06.366000 1050991 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:18:06.366193 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:06.368149 1050991 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 15:18:06.369438 1050991 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:18:06.369731 1050991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:18:06.369768 1050991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:18:06.384418 1050991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0127 15:18:06.384959 1050991 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:18:06.385610 1050991 main.go:141] libmachine: Using API Version  1
	I0127 15:18:06.385644 1050991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:18:06.386079 1050991 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:18:06.386297 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:06.422126 1050991 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:18:06.423330 1050991 start.go:297] selected driver: kvm2
	I0127 15:18:06.423344 1050991 start.go:901] validating driver "kvm2" against &{Name:test-preload-400198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-400198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:18:06.423484 1050991 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:18:06.424162 1050991 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:18:06.424245 1050991 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:18:06.439497 1050991 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:18:06.439860 1050991 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:18:06.439901 1050991 cni.go:84] Creating CNI manager for ""
	I0127 15:18:06.439953 1050991 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:18:06.440013 1050991 start.go:340] cluster config:
	{Name:test-preload-400198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-400198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:18:06.440143 1050991 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:18:06.442674 1050991 out.go:177] * Starting "test-preload-400198" primary control-plane node in "test-preload-400198" cluster
	I0127 15:18:06.443989 1050991 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 15:18:06.474924 1050991 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 15:18:06.474960 1050991 cache.go:56] Caching tarball of preloaded images
	I0127 15:18:06.475158 1050991 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 15:18:06.477025 1050991 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0127 15:18:06.478371 1050991 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 15:18:06.527128 1050991 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 15:18:11.778263 1050991 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 15:18:11.778387 1050991 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 15:18:12.649738 1050991 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0127 15:18:12.649885 1050991 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/config.json ...
	I0127 15:18:12.650136 1050991 start.go:360] acquireMachinesLock for test-preload-400198: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:18:12.650204 1050991 start.go:364] duration metric: took 47.666µs to acquireMachinesLock for "test-preload-400198"
	I0127 15:18:12.650218 1050991 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:18:12.650227 1050991 fix.go:54] fixHost starting: 
	I0127 15:18:12.650501 1050991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:18:12.650539 1050991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:18:12.665781 1050991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0127 15:18:12.666317 1050991 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:18:12.666822 1050991 main.go:141] libmachine: Using API Version  1
	I0127 15:18:12.666844 1050991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:18:12.667250 1050991 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:18:12.667465 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:12.667620 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetState
	I0127 15:18:12.669295 1050991 fix.go:112] recreateIfNeeded on test-preload-400198: state=Stopped err=<nil>
	I0127 15:18:12.669327 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	W0127 15:18:12.669486 1050991 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:18:12.672429 1050991 out.go:177] * Restarting existing kvm2 VM for "test-preload-400198" ...
	I0127 15:18:12.673706 1050991 main.go:141] libmachine: (test-preload-400198) Calling .Start
	I0127 15:18:12.673956 1050991 main.go:141] libmachine: (test-preload-400198) starting domain...
	I0127 15:18:12.673971 1050991 main.go:141] libmachine: (test-preload-400198) ensuring networks are active...
	I0127 15:18:12.674796 1050991 main.go:141] libmachine: (test-preload-400198) Ensuring network default is active
	I0127 15:18:12.675151 1050991 main.go:141] libmachine: (test-preload-400198) Ensuring network mk-test-preload-400198 is active
	I0127 15:18:12.675634 1050991 main.go:141] libmachine: (test-preload-400198) getting domain XML...
	I0127 15:18:12.676572 1050991 main.go:141] libmachine: (test-preload-400198) creating domain...
	I0127 15:18:13.890372 1050991 main.go:141] libmachine: (test-preload-400198) waiting for IP...
	I0127 15:18:13.891350 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:13.891747 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:13.891847 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:13.891735 1051041 retry.go:31] will retry after 200.67349ms: waiting for domain to come up
	I0127 15:18:14.094288 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:14.094748 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:14.094775 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:14.094712 1051041 retry.go:31] will retry after 297.962307ms: waiting for domain to come up
	I0127 15:18:14.394259 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:14.394648 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:14.394684 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:14.394622 1051041 retry.go:31] will retry after 310.228705ms: waiting for domain to come up
	I0127 15:18:14.706124 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:14.706473 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:14.706496 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:14.706458 1051041 retry.go:31] will retry after 433.281318ms: waiting for domain to come up
	I0127 15:18:15.141058 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:15.141679 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:15.141712 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:15.141621 1051041 retry.go:31] will retry after 496.424632ms: waiting for domain to come up
	I0127 15:18:15.639367 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:15.639891 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:15.639924 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:15.639847 1051041 retry.go:31] will retry after 786.932223ms: waiting for domain to come up
	I0127 15:18:16.429021 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:16.429457 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:16.429483 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:16.429433 1051041 retry.go:31] will retry after 1.050038503s: waiting for domain to come up
	I0127 15:18:17.481155 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:17.481537 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:17.481569 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:17.481511 1051041 retry.go:31] will retry after 1.141426795s: waiting for domain to come up
	I0127 15:18:18.624519 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:18.624905 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:18.624977 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:18.624908 1051041 retry.go:31] will retry after 1.308546148s: waiting for domain to come up
	I0127 15:18:19.935259 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:19.935646 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:19.935674 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:19.935629 1051041 retry.go:31] will retry after 1.409711439s: waiting for domain to come up
	I0127 15:18:21.346629 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:21.347254 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:21.347299 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:21.347216 1051041 retry.go:31] will retry after 2.096594466s: waiting for domain to come up
	I0127 15:18:23.445277 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:23.445803 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:23.445832 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:23.445764 1051041 retry.go:31] will retry after 2.521560537s: waiting for domain to come up
	I0127 15:18:25.969441 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:25.969838 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:25.969916 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:25.969800 1051041 retry.go:31] will retry after 3.515676346s: waiting for domain to come up
	I0127 15:18:29.488125 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:29.488471 1050991 main.go:141] libmachine: (test-preload-400198) DBG | unable to find current IP address of domain test-preload-400198 in network mk-test-preload-400198
	I0127 15:18:29.488545 1050991 main.go:141] libmachine: (test-preload-400198) DBG | I0127 15:18:29.488431 1051041 retry.go:31] will retry after 4.7427555s: waiting for domain to come up
	I0127 15:18:34.233558 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.233915 1050991 main.go:141] libmachine: (test-preload-400198) found domain IP: 192.168.39.94
	I0127 15:18:34.233937 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has current primary IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.233943 1050991 main.go:141] libmachine: (test-preload-400198) reserving static IP address...
	I0127 15:18:34.234349 1050991 main.go:141] libmachine: (test-preload-400198) reserved static IP address 192.168.39.94 for domain test-preload-400198
	I0127 15:18:34.234388 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "test-preload-400198", mac: "52:54:00:b5:e2:3b", ip: "192.168.39.94"} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.234402 1050991 main.go:141] libmachine: (test-preload-400198) waiting for SSH...
	I0127 15:18:34.234425 1050991 main.go:141] libmachine: (test-preload-400198) DBG | skip adding static IP to network mk-test-preload-400198 - found existing host DHCP lease matching {name: "test-preload-400198", mac: "52:54:00:b5:e2:3b", ip: "192.168.39.94"}
	I0127 15:18:34.234438 1050991 main.go:141] libmachine: (test-preload-400198) DBG | Getting to WaitForSSH function...
	I0127 15:18:34.236467 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.236749 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.236786 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.236870 1050991 main.go:141] libmachine: (test-preload-400198) DBG | Using SSH client type: external
	I0127 15:18:34.236898 1050991 main.go:141] libmachine: (test-preload-400198) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/test-preload-400198/id_rsa (-rw-------)
	I0127 15:18:34.236940 1050991 main.go:141] libmachine: (test-preload-400198) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/test-preload-400198/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:18:34.236959 1050991 main.go:141] libmachine: (test-preload-400198) DBG | About to run SSH command:
	I0127 15:18:34.236971 1050991 main.go:141] libmachine: (test-preload-400198) DBG | exit 0
	I0127 15:18:34.365177 1050991 main.go:141] libmachine: (test-preload-400198) DBG | SSH cmd err, output: <nil>: 
	I0127 15:18:34.365561 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetConfigRaw
	I0127 15:18:34.366240 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetIP
	I0127 15:18:34.368520 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.368908 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.368941 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.369152 1050991 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/config.json ...
	I0127 15:18:34.369337 1050991 machine.go:93] provisionDockerMachine start ...
	I0127 15:18:34.369357 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:34.369573 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:34.371685 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.371957 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.371989 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.372135 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:34.372313 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:34.372449 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:34.372578 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:34.372693 1050991 main.go:141] libmachine: Using SSH client type: native
	I0127 15:18:34.372885 1050991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0127 15:18:34.372899 1050991 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:18:34.485547 1050991 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 15:18:34.485582 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetMachineName
	I0127 15:18:34.485848 1050991 buildroot.go:166] provisioning hostname "test-preload-400198"
	I0127 15:18:34.485885 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetMachineName
	I0127 15:18:34.486116 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:34.488629 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.488977 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.489029 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.489180 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:34.489379 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:34.489496 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:34.489630 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:34.489760 1050991 main.go:141] libmachine: Using SSH client type: native
	I0127 15:18:34.489999 1050991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0127 15:18:34.490017 1050991 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-400198 && echo "test-preload-400198" | sudo tee /etc/hostname
	I0127 15:18:34.615829 1050991 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-400198
	
	I0127 15:18:34.615864 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:34.618531 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.618835 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.618868 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.619013 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:34.619243 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:34.619428 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:34.619553 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:34.619719 1050991 main.go:141] libmachine: Using SSH client type: native
	I0127 15:18:34.619906 1050991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0127 15:18:34.619926 1050991 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-400198' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-400198/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-400198' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:18:34.738436 1050991 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:18:34.738527 1050991 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:18:34.738561 1050991 buildroot.go:174] setting up certificates
	I0127 15:18:34.738571 1050991 provision.go:84] configureAuth start
	I0127 15:18:34.738582 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetMachineName
	I0127 15:18:34.738910 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetIP
	I0127 15:18:34.741843 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.742235 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.742268 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.742422 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:34.744871 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.745265 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.745287 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.745498 1050991 provision.go:143] copyHostCerts
	I0127 15:18:34.745568 1050991 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:18:34.745590 1050991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:18:34.745658 1050991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:18:34.745765 1050991 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:18:34.745778 1050991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:18:34.745805 1050991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:18:34.745861 1050991 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:18:34.745868 1050991 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:18:34.745889 1050991 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:18:34.745938 1050991 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.test-preload-400198 san=[127.0.0.1 192.168.39.94 localhost minikube test-preload-400198]
	I0127 15:18:34.855174 1050991 provision.go:177] copyRemoteCerts
	I0127 15:18:34.855234 1050991 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:18:34.855264 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:34.857824 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.858214 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:34.858260 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:34.858421 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:34.858652 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:34.858800 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:34.858916 1050991 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/test-preload-400198/id_rsa Username:docker}
	I0127 15:18:34.943749 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:18:34.970431 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 15:18:34.996543 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 15:18:35.021535 1050991 provision.go:87] duration metric: took 282.949251ms to configureAuth
	I0127 15:18:35.021573 1050991 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:18:35.021784 1050991 config.go:182] Loaded profile config "test-preload-400198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 15:18:35.021870 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:35.024672 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.025062 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:35.025099 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.025254 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:35.025447 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:35.025584 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:35.025738 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:35.025879 1050991 main.go:141] libmachine: Using SSH client type: native
	I0127 15:18:35.026111 1050991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0127 15:18:35.026130 1050991 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:18:35.254597 1050991 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:18:35.254632 1050991 machine.go:96] duration metric: took 885.280653ms to provisionDockerMachine
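The provisioning step just above configures CRI-O's insecure-registry range by writing a one-line sysconfig drop-in over SSH and restarting the runtime. Restated as a standalone shell sketch (the commands are taken from what the log prints; that crio.service sources /etc/sysconfig/crio.minikube as an environment file is an assumption about the buildroot image, not something shown in this log):

    # Write the drop-in and restart CRI-O so it picks up the new option.
    sudo mkdir -p /etc/sysconfig
    printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio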
	I0127 15:18:35.254655 1050991 start.go:293] postStartSetup for "test-preload-400198" (driver="kvm2")
	I0127 15:18:35.254671 1050991 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:18:35.254698 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:35.255068 1050991 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:18:35.255122 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:35.257863 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.258233 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:35.258264 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.258397 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:35.258606 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:35.258751 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:35.258877 1050991 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/test-preload-400198/id_rsa Username:docker}
	I0127 15:18:35.343856 1050991 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:18:35.348497 1050991 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:18:35.348525 1050991 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:18:35.348597 1050991 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:18:35.348689 1050991 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:18:35.348803 1050991 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:18:35.358550 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:18:35.382862 1050991 start.go:296] duration metric: took 128.189414ms for postStartSetup
	I0127 15:18:35.382913 1050991 fix.go:56] duration metric: took 22.732684194s for fixHost
	I0127 15:18:35.382941 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:35.385715 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.386028 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:35.386067 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.386236 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:35.386439 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:35.386609 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:35.386726 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:35.386881 1050991 main.go:141] libmachine: Using SSH client type: native
	I0127 15:18:35.387108 1050991 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0127 15:18:35.387124 1050991 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:18:35.497751 1050991 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737991115.451076809
	
	I0127 15:18:35.497776 1050991 fix.go:216] guest clock: 1737991115.451076809
	I0127 15:18:35.497783 1050991 fix.go:229] Guest: 2025-01-27 15:18:35.451076809 +0000 UTC Remote: 2025-01-27 15:18:35.382918318 +0000 UTC m=+29.089314964 (delta=68.158491ms)
	I0127 15:18:35.497804 1050991 fix.go:200] guest clock delta is within tolerance: 68.158491ms
	I0127 15:18:35.497809 1050991 start.go:83] releasing machines lock for "test-preload-400198", held for 22.847596658s
	I0127 15:18:35.497830 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:35.498159 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetIP
	I0127 15:18:35.500668 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.500962 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:35.500999 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.501193 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:35.501651 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:35.501819 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:18:35.501933 1050991 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:18:35.501980 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:35.502075 1050991 ssh_runner.go:195] Run: cat /version.json
	I0127 15:18:35.502099 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:18:35.504300 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.504619 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:35.504647 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.504665 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.504775 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:35.504935 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:35.505033 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:35.505065 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:35.505136 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:35.505221 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:18:35.505296 1050991 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/test-preload-400198/id_rsa Username:docker}
	I0127 15:18:35.505385 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:18:35.505534 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:18:35.505672 1050991 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/test-preload-400198/id_rsa Username:docker}
	I0127 15:18:35.586214 1050991 ssh_runner.go:195] Run: systemctl --version
	I0127 15:18:35.615147 1050991 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:18:35.759713 1050991 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:18:35.766179 1050991 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:18:35.766264 1050991 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:18:35.782702 1050991 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:18:35.782734 1050991 start.go:495] detecting cgroup driver to use...
	I0127 15:18:35.782818 1050991 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:18:35.799898 1050991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:18:35.814627 1050991 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:18:35.814687 1050991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:18:35.828502 1050991 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:18:35.842491 1050991 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:18:35.952439 1050991 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:18:36.110483 1050991 docker.go:233] disabling docker service ...
	I0127 15:18:36.110574 1050991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:18:36.125995 1050991 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:18:36.139830 1050991 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:18:36.258344 1050991 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:18:36.375159 1050991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:18:36.391107 1050991 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:18:36.410795 1050991 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 15:18:36.410859 1050991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:18:36.422172 1050991 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:18:36.422257 1050991 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:18:36.433623 1050991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:18:36.444620 1050991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:18:36.455998 1050991 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:18:36.467528 1050991 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:18:36.478672 1050991 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:18:36.496345 1050991 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
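The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager, force conmon into the pod cgroup, and open unprivileged low ports. Pulling the values straight out of the sed expressions, the relevant keys should end up looking like this (the surrounding file content is assumed, not shown in the log):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
    pause_image = "registry.k8s.io/pause:3.7"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]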
	I0127 15:18:36.507843 1050991 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:18:36.518195 1050991 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:18:36.518267 1050991 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:18:36.531382 1050991 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
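The failed sysctl probe above is expected: br_netfilter is not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist, and the log itself notes this "might be okay". The two follow-up commands are the fix. As a sketch:

    sudo modprobe br_netfilter                           # creates the net.bridge.* sysctls
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"  # needed so the node can forward pod traffic
    sudo sysctl net.bridge.bridge-nf-call-iptables       # should now resolve instead of erroring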
	I0127 15:18:36.542115 1050991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:18:36.670847 1050991 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:18:36.762222 1050991 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:18:36.762320 1050991 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:18:36.769430 1050991 start.go:563] Will wait 60s for crictl version
	I0127 15:18:36.769515 1050991 ssh_runner.go:195] Run: which crictl
	I0127 15:18:36.773628 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:18:36.812689 1050991 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:18:36.812798 1050991 ssh_runner.go:195] Run: crio --version
	I0127 15:18:36.842559 1050991 ssh_runner.go:195] Run: crio --version
	I0127 15:18:36.872459 1050991 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0127 15:18:36.873896 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetIP
	I0127 15:18:36.876388 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:36.876672 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:18:36.876697 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:18:36.876881 1050991 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 15:18:36.881428 1050991 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
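The one-liner above is the pattern minikube uses to pin names in the guest's /etc/hosts: strip any existing entry for the name, append a fresh one, and copy the temp file back into place (the same pattern appears again further down for control-plane.minikube.internal). After this run, /etc/hosts should contain a line like:

    192.168.39.1	host.minikube.internal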
	I0127 15:18:36.895084 1050991 kubeadm.go:883] updating cluster {Name:test-preload-400198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-400198 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:18:36.895243 1050991 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 15:18:36.895316 1050991 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:18:36.934505 1050991 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 15:18:36.934576 1050991 ssh_runner.go:195] Run: which lz4
	I0127 15:18:36.938840 1050991 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:18:36.943509 1050991 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:18:36.943541 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0127 15:18:38.565847 1050991 crio.go:462] duration metric: took 1.627045851s to copy over tarball
	I0127 15:18:38.565944 1050991 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 15:18:40.950847 1050991 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.384861972s)
	I0127 15:18:40.950885 1050991 crio.go:469] duration metric: took 2.384995607s to extract the tarball
	I0127 15:18:40.950896 1050991 ssh_runner.go:146] rm: /preloaded.tar.lz4
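The preload handling above boils down to: check whether /preloaded.tar.lz4 already exists on the guest, transfer the cached tarball if it does not, extract it into /var with extended attributes preserved, then delete the tarball. A condensed sketch of the guest-side commands (the actual transfer happens through minikube's ssh_runner scp helper, abbreviated here to a comment):

    stat -c "%s %y" /preloaded.tar.lz4 \
      || echo "missing - copy preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 over first"
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4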
	I0127 15:18:40.993598 1050991 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:18:41.037946 1050991 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 15:18:41.037977 1050991 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 15:18:41.038053 1050991 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:18:41.038074 1050991 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 15:18:41.038073 1050991 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 15:18:41.038088 1050991 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 15:18:41.038170 1050991 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 15:18:41.038185 1050991 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 15:18:41.038201 1050991 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 15:18:41.038226 1050991 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 15:18:41.039655 1050991 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 15:18:41.039663 1050991 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 15:18:41.039681 1050991 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 15:18:41.039680 1050991 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 15:18:41.039655 1050991 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 15:18:41.039656 1050991 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:18:41.039656 1050991 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 15:18:41.039655 1050991 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 15:18:41.213066 1050991 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0127 15:18:41.221857 1050991 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0127 15:18:41.232402 1050991 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 15:18:41.248398 1050991 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 15:18:41.255835 1050991 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 15:18:41.273152 1050991 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 15:18:41.289054 1050991 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0127 15:18:41.289102 1050991 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 15:18:41.289152 1050991 ssh_runner.go:195] Run: which crictl
	I0127 15:18:41.296826 1050991 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0127 15:18:41.305574 1050991 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0127 15:18:41.305620 1050991 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 15:18:41.305667 1050991 ssh_runner.go:195] Run: which crictl
	I0127 15:18:41.375409 1050991 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 15:18:41.375453 1050991 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 15:18:41.375496 1050991 ssh_runner.go:195] Run: which crictl
	I0127 15:18:41.381446 1050991 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 15:18:41.381488 1050991 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 15:18:41.381532 1050991 ssh_runner.go:195] Run: which crictl
	I0127 15:18:41.395037 1050991 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0127 15:18:41.395091 1050991 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 15:18:41.395100 1050991 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0127 15:18:41.395129 1050991 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 15:18:41.395146 1050991 ssh_runner.go:195] Run: which crictl
	I0127 15:18:41.395172 1050991 ssh_runner.go:195] Run: which crictl
	I0127 15:18:41.395230 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 15:18:41.428349 1050991 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0127 15:18:41.428392 1050991 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 15:18:41.428437 1050991 ssh_runner.go:195] Run: which crictl
	I0127 15:18:41.428512 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 15:18:41.428576 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 15:18:41.428583 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 15:18:41.428666 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 15:18:41.428699 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 15:18:41.444758 1050991 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:18:41.454947 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 15:18:41.462554 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 15:18:41.621586 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 15:18:41.621598 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 15:18:41.621598 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 15:18:41.621693 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 15:18:41.621758 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 15:18:41.781414 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 15:18:41.781476 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 15:18:41.781515 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 15:18:41.781606 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 15:18:41.781651 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 15:18:41.781736 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 15:18:41.781773 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 15:18:41.897620 1050991 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 15:18:41.897740 1050991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 15:18:41.938930 1050991 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 15:18:41.938965 1050991 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 15:18:41.939009 1050991 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0127 15:18:41.939101 1050991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 15:18:41.939103 1050991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 15:18:41.939164 1050991 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 15:18:41.939168 1050991 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 15:18:41.939233 1050991 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 15:18:41.939244 1050991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 15:18:41.939278 1050991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 15:18:41.939320 1050991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 15:18:41.981574 1050991 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0127 15:18:41.981609 1050991 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 15:18:41.981629 1050991 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0127 15:18:41.981653 1050991 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0127 15:18:41.981667 1050991 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0127 15:18:41.981667 1050991 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 15:18:41.981701 1050991 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0127 15:18:41.981713 1050991 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 15:18:41.981772 1050991 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0127 15:18:41.981817 1050991 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 15:18:45.252892 1050991 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.271183644s)
	I0127 15:18:45.252915 1050991 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.271075957s)
	I0127 15:18:45.252938 1050991 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0127 15:18:45.252947 1050991 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0127 15:18:45.252955 1050991 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 15:18:45.253024 1050991 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 15:18:46.099184 1050991 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0127 15:18:46.099222 1050991 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 15:18:46.099284 1050991 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 15:18:48.253776 1050991 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.154443615s)
	I0127 15:18:48.253815 1050991 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 15:18:48.253829 1050991 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 15:18:48.253887 1050991 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 15:18:48.699626 1050991 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0127 15:18:48.699664 1050991 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 15:18:48.699726 1050991 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 15:18:49.047716 1050991 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 15:18:49.047750 1050991 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 15:18:49.047807 1050991 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 15:18:49.799771 1050991 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0127 15:18:49.799819 1050991 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 15:18:49.799880 1050991 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 15:18:49.945837 1050991 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0127 15:18:49.945889 1050991 cache_images.go:123] Successfully loaded all cached images
	I0127 15:18:49.945898 1050991 cache_images.go:92] duration metric: took 8.907902846s to LoadCachedImages
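The image-loading loop that just finished follows one pattern per image: stat the tarball under /var/lib/minikube/images to decide whether it still needs to be copied over, then load it with podman; because CRI-O and podman share the same containers/storage image store, an image loaded this way is visible to crictl afterwards. One iteration, using kube-apiserver as the example (both commands appear verbatim in the log):

    stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4   # "(exists)" -> skip the transfer
    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4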
	I0127 15:18:49.945914 1050991 kubeadm.go:934] updating node { 192.168.39.94 8443 v1.24.4 crio true true} ...
	I0127 15:18:49.946065 1050991 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-400198 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-400198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
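One detail of the kubelet unit above that is easy to misread: the bare "ExecStart=" line is not a mistake. In a systemd drop-in, an empty ExecStart= clears whatever ExecStart the base unit defined, so only the minikube-supplied kubelet command line beneath it actually runs. The rendered text is installed a few lines further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 378-byte scp).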
	I0127 15:18:49.946155 1050991 ssh_runner.go:195] Run: crio config
	I0127 15:18:50.000874 1050991 cni.go:84] Creating CNI manager for ""
	I0127 15:18:50.000896 1050991 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:18:50.000906 1050991 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:18:50.000926 1050991 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-400198 NodeName:test-preload-400198 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 15:18:50.001125 1050991 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-400198"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:18:50.001216 1050991 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0127 15:18:50.012016 1050991 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:18:50.012119 1050991 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:18:50.022079 1050991 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0127 15:18:50.039477 1050991 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:18:50.056441 1050991 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0127 15:18:50.074024 1050991 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0127 15:18:50.078082 1050991 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:18:50.090615 1050991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:18:50.219900 1050991 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:18:50.237795 1050991 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198 for IP: 192.168.39.94
	I0127 15:18:50.237826 1050991 certs.go:194] generating shared ca certs ...
	I0127 15:18:50.237855 1050991 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:18:50.238067 1050991 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:18:50.238128 1050991 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:18:50.238144 1050991 certs.go:256] generating profile certs ...
	I0127 15:18:50.238278 1050991 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/client.key
	I0127 15:18:50.238374 1050991 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/apiserver.key.622ee580
	I0127 15:18:50.238432 1050991 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/proxy-client.key
	I0127 15:18:50.238614 1050991 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:18:50.238677 1050991 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:18:50.238694 1050991 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:18:50.238734 1050991 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:18:50.238766 1050991 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:18:50.238793 1050991 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:18:50.238837 1050991 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:18:50.239684 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:18:50.276397 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:18:50.311119 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:18:50.343423 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:18:50.372205 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 15:18:50.402506 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 15:18:50.436913 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:18:50.474623 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 15:18:50.499028 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:18:50.523256 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:18:50.553656 1050991 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:18:50.581372 1050991 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:18:50.600752 1050991 ssh_runner.go:195] Run: openssl version
	I0127 15:18:50.607232 1050991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:18:50.618842 1050991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:18:50.623928 1050991 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:18:50.624000 1050991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:18:50.630387 1050991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:18:50.641722 1050991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:18:50.653076 1050991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:18:50.657867 1050991 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:18:50.657935 1050991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:18:50.663691 1050991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:18:50.674802 1050991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:18:50.685603 1050991 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:18:50.690396 1050991 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:18:50.690462 1050991 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:18:50.696175 1050991 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
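The openssl/ln pairs above implement OpenSSL's hashed CA directory layout: each CA certificate is linked into /etc/ssl/certs under its own name, and a second symlink named <subject-hash>.0 is created so TLS clients can locate it by hash. For the minikubeCA certificate, whose hash the log shows as b5213941, the equivalent commands are:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"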
	I0127 15:18:50.707111 1050991 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:18:50.711940 1050991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:18:50.718200 1050991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:18:50.724420 1050991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:18:50.730536 1050991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:18:50.736707 1050991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:18:50.742702 1050991 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
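The six openssl probes above use -checkend 86400, which makes openssl exit non-zero if the certificate will expire within the next 86,400 seconds (24 hours); a failing check here is presumably what would force a certificate to be regenerated. Standalone form:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring soon"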
	I0127 15:18:50.748721 1050991 kubeadm.go:392] StartCluster: {Name:test-preload-400198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-400198 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:18:50.748800 1050991 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:18:50.748846 1050991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:18:50.795845 1050991 cri.go:89] found id: ""
	I0127 15:18:50.795923 1050991 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:18:50.806311 1050991 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 15:18:50.806329 1050991 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 15:18:50.806373 1050991 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 15:18:50.816151 1050991 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:18:50.816640 1050991 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-400198" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:18:50.816760 1050991 kubeconfig.go:62] /home/jenkins/minikube-integration/20321-1005652/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-400198" cluster setting kubeconfig missing "test-preload-400198" context setting]
	I0127 15:18:50.817088 1050991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:18:50.817727 1050991 kapi.go:59] client config for test-preload-400198: &rest.Config{Host:"https://192.168.39.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/client.crt", KeyFile:"/home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/client.key", CAFile:"/home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 15:18:50.818402 1050991 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 15:18:50.828162 1050991 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.94
	I0127 15:18:50.828196 1050991 kubeadm.go:1160] stopping kube-system containers ...
	I0127 15:18:50.828210 1050991 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 15:18:50.828253 1050991 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:18:50.865262 1050991 cri.go:89] found id: ""
	I0127 15:18:50.865341 1050991 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 15:18:50.881689 1050991 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:18:50.891614 1050991 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:18:50.891652 1050991 kubeadm.go:157] found existing configuration files:
	
	I0127 15:18:50.891717 1050991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:18:50.901083 1050991 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:18:50.901160 1050991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:18:50.911111 1050991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:18:50.920633 1050991 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:18:50.920696 1050991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:18:50.930215 1050991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:18:50.939566 1050991 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:18:50.939699 1050991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:18:50.949131 1050991 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:18:50.958334 1050991 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:18:50.958386 1050991 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:18:50.967857 1050991 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:18:50.977554 1050991 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:18:51.077691 1050991 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:18:51.866671 1050991 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:18:52.165429 1050991 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:18:52.244585 1050991 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:18:52.309663 1050991 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:18:52.309760 1050991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:18:52.810305 1050991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:18:53.310102 1050991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:18:53.379604 1050991 api_server.go:72] duration metric: took 1.069940344s to wait for apiserver process to appear ...
	I0127 15:18:53.379643 1050991 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:18:53.379667 1050991 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 15:18:53.380212 1050991 api_server.go:269] stopped: https://192.168.39.94:8443/healthz: Get "https://192.168.39.94:8443/healthz": dial tcp 192.168.39.94:8443: connect: connection refused
	I0127 15:18:53.879877 1050991 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 15:18:53.880605 1050991 api_server.go:269] stopped: https://192.168.39.94:8443/healthz: Get "https://192.168.39.94:8443/healthz": dial tcp 192.168.39.94:8443: connect: connection refused
	I0127 15:18:54.380395 1050991 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 15:18:57.565646 1050991 api_server.go:279] https://192.168.39.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 15:18:57.565676 1050991 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 15:18:57.565692 1050991 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 15:18:57.603848 1050991 api_server.go:279] https://192.168.39.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 15:18:57.603877 1050991 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 15:18:57.879785 1050991 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 15:18:57.884953 1050991 api_server.go:279] https://192.168.39.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:18:57.884983 1050991 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:18:58.380612 1050991 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 15:18:58.387996 1050991 api_server.go:279] https://192.168.39.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:18:58.388033 1050991 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:18:58.880604 1050991 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 15:18:58.885834 1050991 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0127 15:18:58.892823 1050991 api_server.go:141] control plane version: v1.24.4
	I0127 15:18:58.892855 1050991 api_server.go:131] duration metric: took 5.513204596s to wait for apiserver health ...
	I0127 15:18:58.892865 1050991 cni.go:84] Creating CNI manager for ""
	I0127 15:18:58.892872 1050991 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:18:58.894841 1050991 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:18:58.896188 1050991 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:18:58.907304 1050991 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:18:58.924949 1050991 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:18:58.925080 1050991 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 15:18:58.925101 1050991 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 15:18:58.935760 1050991 system_pods.go:59] 7 kube-system pods found
	I0127 15:18:58.935792 1050991 system_pods.go:61] "coredns-6d4b75cb6d-drdbq" [8a68d285-3b27-4a75-9850-3e9f04bff887] Running
	I0127 15:18:58.935798 1050991 system_pods.go:61] "etcd-test-preload-400198" [030e4456-c0b9-49ab-914d-38c1328715d0] Running
	I0127 15:18:58.935848 1050991 system_pods.go:61] "kube-apiserver-test-preload-400198" [20e9337d-ec47-4fa5-9931-a5130022e0a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 15:18:58.935859 1050991 system_pods.go:61] "kube-controller-manager-test-preload-400198" [7c2fbd16-bd93-46fc-8791-03127847a5dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 15:18:58.935868 1050991 system_pods.go:61] "kube-proxy-786wt" [da6a5f31-9169-474c-a9e3-fed9fbe14d26] Running
	I0127 15:18:58.935876 1050991 system_pods.go:61] "kube-scheduler-test-preload-400198" [9b748cc3-8d31-454f-8cd9-e73e53c1e915] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 15:18:58.935889 1050991 system_pods.go:61] "storage-provisioner" [350e8112-2ff9-40ec-8285-848dd4f5e878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 15:18:58.935902 1050991 system_pods.go:74] duration metric: took 10.922519ms to wait for pod list to return data ...
	I0127 15:18:58.935913 1050991 node_conditions.go:102] verifying NodePressure condition ...
	I0127 15:18:58.940640 1050991 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 15:18:58.940683 1050991 node_conditions.go:123] node cpu capacity is 2
	I0127 15:18:58.940699 1050991 node_conditions.go:105] duration metric: took 4.78079ms to run NodePressure ...
	I0127 15:18:58.940729 1050991 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:18:59.227320 1050991 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 15:18:59.231884 1050991 kubeadm.go:739] kubelet initialised
	I0127 15:18:59.231914 1050991 kubeadm.go:740] duration metric: took 4.537955ms waiting for restarted kubelet to initialise ...
	I0127 15:18:59.231926 1050991 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:18:59.238997 1050991 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-drdbq" in "kube-system" namespace to be "Ready" ...
	I0127 15:18:59.246034 1050991 pod_ready.go:98] node "test-preload-400198" hosting pod "coredns-6d4b75cb6d-drdbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.246064 1050991 pod_ready.go:82] duration metric: took 7.022346ms for pod "coredns-6d4b75cb6d-drdbq" in "kube-system" namespace to be "Ready" ...
	E0127 15:18:59.246078 1050991 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-400198" hosting pod "coredns-6d4b75cb6d-drdbq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.246089 1050991 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:18:59.253142 1050991 pod_ready.go:98] node "test-preload-400198" hosting pod "etcd-test-preload-400198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.253166 1050991 pod_ready.go:82] duration metric: took 7.065505ms for pod "etcd-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	E0127 15:18:59.253175 1050991 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-400198" hosting pod "etcd-test-preload-400198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.253182 1050991 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:18:59.258370 1050991 pod_ready.go:98] node "test-preload-400198" hosting pod "kube-apiserver-test-preload-400198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.258399 1050991 pod_ready.go:82] duration metric: took 5.206615ms for pod "kube-apiserver-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	E0127 15:18:59.258417 1050991 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-400198" hosting pod "kube-apiserver-test-preload-400198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.258424 1050991 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:18:59.329112 1050991 pod_ready.go:98] node "test-preload-400198" hosting pod "kube-controller-manager-test-preload-400198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.329144 1050991 pod_ready.go:82] duration metric: took 70.708457ms for pod "kube-controller-manager-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	E0127 15:18:59.329155 1050991 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-400198" hosting pod "kube-controller-manager-test-preload-400198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.329162 1050991 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-786wt" in "kube-system" namespace to be "Ready" ...
	I0127 15:18:59.730125 1050991 pod_ready.go:98] node "test-preload-400198" hosting pod "kube-proxy-786wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.730154 1050991 pod_ready.go:82] duration metric: took 400.982369ms for pod "kube-proxy-786wt" in "kube-system" namespace to be "Ready" ...
	E0127 15:18:59.730164 1050991 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-400198" hosting pod "kube-proxy-786wt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:18:59.730171 1050991 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:00.129813 1050991 pod_ready.go:98] node "test-preload-400198" hosting pod "kube-scheduler-test-preload-400198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:19:00.129856 1050991 pod_ready.go:82] duration metric: took 399.667479ms for pod "kube-scheduler-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	E0127 15:19:00.129870 1050991 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-400198" hosting pod "kube-scheduler-test-preload-400198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-400198" has status "Ready":"False"
	I0127 15:19:00.129891 1050991 pod_ready.go:39] duration metric: took 897.943757ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:19:00.129919 1050991 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:19:00.144092 1050991 ops.go:34] apiserver oom_adj: -16
	I0127 15:19:00.144122 1050991 kubeadm.go:597] duration metric: took 9.337785544s to restartPrimaryControlPlane
	I0127 15:19:00.144135 1050991 kubeadm.go:394] duration metric: took 9.395425346s to StartCluster
	I0127 15:19:00.144179 1050991 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:19:00.144271 1050991 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:19:00.145161 1050991 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:19:00.145408 1050991 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:19:00.145487 1050991 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:19:00.145621 1050991 addons.go:69] Setting storage-provisioner=true in profile "test-preload-400198"
	I0127 15:19:00.145642 1050991 addons.go:69] Setting default-storageclass=true in profile "test-preload-400198"
	I0127 15:19:00.145647 1050991 addons.go:238] Setting addon storage-provisioner=true in "test-preload-400198"
	W0127 15:19:00.145658 1050991 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:19:00.145683 1050991 config.go:182] Loaded profile config "test-preload-400198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 15:19:00.145705 1050991 host.go:66] Checking if "test-preload-400198" exists ...
	I0127 15:19:00.145660 1050991 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-400198"
	I0127 15:19:00.146178 1050991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:19:00.146232 1050991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:19:00.146178 1050991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:19:00.146345 1050991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:19:00.147166 1050991 out.go:177] * Verifying Kubernetes components...
	I0127 15:19:00.148525 1050991 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:19:00.162094 1050991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0127 15:19:00.162101 1050991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I0127 15:19:00.162651 1050991 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:19:00.162698 1050991 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:19:00.163140 1050991 main.go:141] libmachine: Using API Version  1
	I0127 15:19:00.163167 1050991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:19:00.163266 1050991 main.go:141] libmachine: Using API Version  1
	I0127 15:19:00.163287 1050991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:19:00.163506 1050991 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:19:00.163681 1050991 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:19:00.163853 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetState
	I0127 15:19:00.164071 1050991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:19:00.164117 1050991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:19:00.166440 1050991 kapi.go:59] client config for test-preload-400198: &rest.Config{Host:"https://192.168.39.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/client.crt", KeyFile:"/home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/test-preload-400198/client.key", CAFile:"/home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 15:19:00.166831 1050991 addons.go:238] Setting addon default-storageclass=true in "test-preload-400198"
	W0127 15:19:00.166851 1050991 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:19:00.166883 1050991 host.go:66] Checking if "test-preload-400198" exists ...
	I0127 15:19:00.167301 1050991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:19:00.167354 1050991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:19:00.180815 1050991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I0127 15:19:00.181385 1050991 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:19:00.181993 1050991 main.go:141] libmachine: Using API Version  1
	I0127 15:19:00.182020 1050991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:19:00.182049 1050991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0127 15:19:00.182418 1050991 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:19:00.182499 1050991 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:19:00.182601 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetState
	I0127 15:19:00.183060 1050991 main.go:141] libmachine: Using API Version  1
	I0127 15:19:00.183095 1050991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:19:00.183455 1050991 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:19:00.184122 1050991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:19:00.184171 1050991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:19:00.184283 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:19:00.186585 1050991 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:19:00.188112 1050991 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:19:00.188135 1050991 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:19:00.188157 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:19:00.191668 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:19:00.192190 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:19:00.192224 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:19:00.192429 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:19:00.192649 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:19:00.192803 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:19:00.192933 1050991 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/test-preload-400198/id_rsa Username:docker}
	I0127 15:19:00.229094 1050991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0127 15:19:00.229568 1050991 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:19:00.230088 1050991 main.go:141] libmachine: Using API Version  1
	I0127 15:19:00.230116 1050991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:19:00.230460 1050991 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:19:00.230688 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetState
	I0127 15:19:00.232429 1050991 main.go:141] libmachine: (test-preload-400198) Calling .DriverName
	I0127 15:19:00.232665 1050991 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:19:00.232684 1050991 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:19:00.232718 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHHostname
	I0127 15:19:00.235729 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:19:00.236161 1050991 main.go:141] libmachine: (test-preload-400198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:e2:3b", ip: ""} in network mk-test-preload-400198: {Iface:virbr1 ExpiryTime:2025-01-27 16:18:24 +0000 UTC Type:0 Mac:52:54:00:b5:e2:3b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-400198 Clientid:01:52:54:00:b5:e2:3b}
	I0127 15:19:00.236184 1050991 main.go:141] libmachine: (test-preload-400198) DBG | domain test-preload-400198 has defined IP address 192.168.39.94 and MAC address 52:54:00:b5:e2:3b in network mk-test-preload-400198
	I0127 15:19:00.236362 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHPort
	I0127 15:19:00.236557 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHKeyPath
	I0127 15:19:00.236801 1050991 main.go:141] libmachine: (test-preload-400198) Calling .GetSSHUsername
	I0127 15:19:00.236971 1050991 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/test-preload-400198/id_rsa Username:docker}
	I0127 15:19:00.327329 1050991 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:19:00.343792 1050991 node_ready.go:35] waiting up to 6m0s for node "test-preload-400198" to be "Ready" ...
	I0127 15:19:00.442240 1050991 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:19:00.491647 1050991 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:19:01.507010 1050991 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.01532605s)
	I0127 15:19:01.507060 1050991 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.064780784s)
	I0127 15:19:01.507086 1050991 main.go:141] libmachine: Making call to close driver server
	I0127 15:19:01.507095 1050991 main.go:141] libmachine: Making call to close driver server
	I0127 15:19:01.507103 1050991 main.go:141] libmachine: (test-preload-400198) Calling .Close
	I0127 15:19:01.507106 1050991 main.go:141] libmachine: (test-preload-400198) Calling .Close
	I0127 15:19:01.507408 1050991 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:19:01.507427 1050991 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:19:01.507436 1050991 main.go:141] libmachine: Making call to close driver server
	I0127 15:19:01.507444 1050991 main.go:141] libmachine: (test-preload-400198) Calling .Close
	I0127 15:19:01.507548 1050991 main.go:141] libmachine: (test-preload-400198) DBG | Closing plugin on server side
	I0127 15:19:01.507561 1050991 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:19:01.507574 1050991 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:19:01.507594 1050991 main.go:141] libmachine: Making call to close driver server
	I0127 15:19:01.507602 1050991 main.go:141] libmachine: (test-preload-400198) Calling .Close
	I0127 15:19:01.507676 1050991 main.go:141] libmachine: (test-preload-400198) DBG | Closing plugin on server side
	I0127 15:19:01.507678 1050991 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:19:01.507692 1050991 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:19:01.507833 1050991 main.go:141] libmachine: (test-preload-400198) DBG | Closing plugin on server side
	I0127 15:19:01.507829 1050991 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:19:01.507851 1050991 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:19:01.515923 1050991 main.go:141] libmachine: Making call to close driver server
	I0127 15:19:01.515942 1050991 main.go:141] libmachine: (test-preload-400198) Calling .Close
	I0127 15:19:01.516182 1050991 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:19:01.516202 1050991 main.go:141] libmachine: (test-preload-400198) DBG | Closing plugin on server side
	I0127 15:19:01.516208 1050991 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:19:01.517986 1050991 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 15:19:01.519131 1050991 addons.go:514] duration metric: took 1.373660486s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 15:19:02.347661 1050991 node_ready.go:53] node "test-preload-400198" has status "Ready":"False"
	I0127 15:19:04.347845 1050991 node_ready.go:53] node "test-preload-400198" has status "Ready":"False"
	I0127 15:19:06.348539 1050991 node_ready.go:53] node "test-preload-400198" has status "Ready":"False"
	I0127 15:19:08.348578 1050991 node_ready.go:49] node "test-preload-400198" has status "Ready":"True"
	I0127 15:19:08.348609 1050991 node_ready.go:38] duration metric: took 8.004769857s for node "test-preload-400198" to be "Ready" ...
	I0127 15:19:08.348623 1050991 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:19:08.353614 1050991 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-drdbq" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:08.358268 1050991 pod_ready.go:93] pod "coredns-6d4b75cb6d-drdbq" in "kube-system" namespace has status "Ready":"True"
	I0127 15:19:08.358290 1050991 pod_ready.go:82] duration metric: took 4.64761ms for pod "coredns-6d4b75cb6d-drdbq" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:08.358314 1050991 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:08.362691 1050991 pod_ready.go:93] pod "etcd-test-preload-400198" in "kube-system" namespace has status "Ready":"True"
	I0127 15:19:08.362711 1050991 pod_ready.go:82] duration metric: took 4.389517ms for pod "etcd-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:08.362722 1050991 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:08.369177 1050991 pod_ready.go:93] pod "kube-apiserver-test-preload-400198" in "kube-system" namespace has status "Ready":"True"
	I0127 15:19:08.369197 1050991 pod_ready.go:82] duration metric: took 6.466982ms for pod "kube-apiserver-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:08.369209 1050991 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:08.877403 1050991 pod_ready.go:93] pod "kube-controller-manager-test-preload-400198" in "kube-system" namespace has status "Ready":"True"
	I0127 15:19:08.877430 1050991 pod_ready.go:82] duration metric: took 508.212929ms for pod "kube-controller-manager-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:08.877443 1050991 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-786wt" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:09.148678 1050991 pod_ready.go:93] pod "kube-proxy-786wt" in "kube-system" namespace has status "Ready":"True"
	I0127 15:19:09.148708 1050991 pod_ready.go:82] duration metric: took 271.257438ms for pod "kube-proxy-786wt" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:09.148723 1050991 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:09.948278 1050991 pod_ready.go:93] pod "kube-scheduler-test-preload-400198" in "kube-system" namespace has status "Ready":"True"
	I0127 15:19:09.948301 1050991 pod_ready.go:82] duration metric: took 799.570089ms for pod "kube-scheduler-test-preload-400198" in "kube-system" namespace to be "Ready" ...
	I0127 15:19:09.948311 1050991 pod_ready.go:39] duration metric: took 1.599674273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:19:09.948326 1050991 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:19:09.948377 1050991 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:19:09.963772 1050991 api_server.go:72] duration metric: took 9.818332361s to wait for apiserver process to appear ...
	I0127 15:19:09.963803 1050991 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:19:09.963822 1050991 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 15:19:09.969435 1050991 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0127 15:19:09.970541 1050991 api_server.go:141] control plane version: v1.24.4
	I0127 15:19:09.970563 1050991 api_server.go:131] duration metric: took 6.752824ms to wait for apiserver health ...
	I0127 15:19:09.970573 1050991 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:19:10.159225 1050991 system_pods.go:59] 7 kube-system pods found
	I0127 15:19:10.159260 1050991 system_pods.go:61] "coredns-6d4b75cb6d-drdbq" [8a68d285-3b27-4a75-9850-3e9f04bff887] Running
	I0127 15:19:10.159265 1050991 system_pods.go:61] "etcd-test-preload-400198" [030e4456-c0b9-49ab-914d-38c1328715d0] Running
	I0127 15:19:10.159275 1050991 system_pods.go:61] "kube-apiserver-test-preload-400198" [20e9337d-ec47-4fa5-9931-a5130022e0a6] Running
	I0127 15:19:10.159279 1050991 system_pods.go:61] "kube-controller-manager-test-preload-400198" [7c2fbd16-bd93-46fc-8791-03127847a5dc] Running
	I0127 15:19:10.159282 1050991 system_pods.go:61] "kube-proxy-786wt" [da6a5f31-9169-474c-a9e3-fed9fbe14d26] Running
	I0127 15:19:10.159285 1050991 system_pods.go:61] "kube-scheduler-test-preload-400198" [9b748cc3-8d31-454f-8cd9-e73e53c1e915] Running
	I0127 15:19:10.159290 1050991 system_pods.go:61] "storage-provisioner" [350e8112-2ff9-40ec-8285-848dd4f5e878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 15:19:10.159297 1050991 system_pods.go:74] duration metric: took 188.718503ms to wait for pod list to return data ...
	I0127 15:19:10.159311 1050991 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:19:10.349316 1050991 default_sa.go:45] found service account: "default"
	I0127 15:19:10.349346 1050991 default_sa.go:55] duration metric: took 190.028405ms for default service account to be created ...
	I0127 15:19:10.349358 1050991 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:19:10.551303 1050991 system_pods.go:87] 7 kube-system pods found
	I0127 15:19:10.749470 1050991 system_pods.go:105] "coredns-6d4b75cb6d-drdbq" [8a68d285-3b27-4a75-9850-3e9f04bff887] Running
	I0127 15:19:10.749492 1050991 system_pods.go:105] "etcd-test-preload-400198" [030e4456-c0b9-49ab-914d-38c1328715d0] Running
	I0127 15:19:10.749502 1050991 system_pods.go:105] "kube-apiserver-test-preload-400198" [20e9337d-ec47-4fa5-9931-a5130022e0a6] Running
	I0127 15:19:10.749508 1050991 system_pods.go:105] "kube-controller-manager-test-preload-400198" [7c2fbd16-bd93-46fc-8791-03127847a5dc] Running
	I0127 15:19:10.749512 1050991 system_pods.go:105] "kube-proxy-786wt" [da6a5f31-9169-474c-a9e3-fed9fbe14d26] Running
	I0127 15:19:10.749516 1050991 system_pods.go:105] "kube-scheduler-test-preload-400198" [9b748cc3-8d31-454f-8cd9-e73e53c1e915] Running
	I0127 15:19:10.749526 1050991 system_pods.go:105] "storage-provisioner" [350e8112-2ff9-40ec-8285-848dd4f5e878] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 15:19:10.749535 1050991 system_pods.go:147] duration metric: took 400.168934ms to wait for k8s-apps to be running ...
	I0127 15:19:10.749545 1050991 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 15:19:10.749590 1050991 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:19:10.764818 1050991 system_svc.go:56] duration metric: took 15.260447ms WaitForService to wait for kubelet
	I0127 15:19:10.764851 1050991 kubeadm.go:582] duration metric: took 10.6194148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:19:10.764905 1050991 node_conditions.go:102] verifying NodePressure condition ...
	I0127 15:19:10.948930 1050991 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 15:19:10.948958 1050991 node_conditions.go:123] node cpu capacity is 2
	I0127 15:19:10.948971 1050991 node_conditions.go:105] duration metric: took 184.059561ms to run NodePressure ...
	I0127 15:19:10.948995 1050991 start.go:241] waiting for startup goroutines ...
	I0127 15:19:10.949029 1050991 start.go:246] waiting for cluster config update ...
	I0127 15:19:10.949045 1050991 start.go:255] writing updated cluster config ...
	I0127 15:19:10.949377 1050991 ssh_runner.go:195] Run: rm -f paused
	I0127 15:19:11.002999 1050991 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0127 15:19:11.005160 1050991 out.go:201] 
	W0127 15:19:11.006664 1050991 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0127 15:19:11.008766 1050991 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0127 15:19:11.009964 1050991 out.go:177] * Done! kubectl is now configured to use "test-preload-400198" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.939107455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991151939082374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dab155d5-923b-4129-855c-82acaa0d2a00 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.939586446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8132d747-1588-4f5e-8b05-36be8bc7f607 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.939641557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8132d747-1588-4f5e-8b05-36be8bc7f607 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.939843648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dfbdc14791b84b851a9c965823d70adadd071511c5dc684deffd4ee6db4e313,PodSandboxId:b49041ab513f788948540b407070880fe46d7c4a0ee98ed8975fc82d73b6b281,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737991146540840222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-drdbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a68d285-3b27-4a75-9850-3e9f04bff887,},Annotations:map[string]string{io.kubernetes.container.hash: 2599a696,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1ba520f136e265d2aba0e922869cc3f7ca89629eb0bd9d821a2ff556108856,PodSandboxId:24aff06740d40de54f44a3c642dce6bf09c47b6f5c72e90000394765e7992afb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737991139660704644,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-786wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: da6a5f31-9169-474c-a9e3-fed9fbe14d26,},Annotations:map[string]string{io.kubernetes.container.hash: 97f9e31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42feffe14312c12cc9067b9c04e6e9ec49e8c36149a0a482f482367ebb482de5,PodSandboxId:5285626938a6c44a89b33dd65ae4f6b759222494c2269abf9d9c0506e54fe939,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737991139430928508,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 350
e8112-2ff9-40ec-8285-848dd4f5e878,},Annotations:map[string]string{io.kubernetes.container.hash: d8fa0381,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1bcb1969daf7a00e1b7f1f8eca2105b444d0cabed75cd70b4d07233bdb253f2,PodSandboxId:c737fe252189c2e306476085d1d5f7f01248e7fb9cdcde6beb86168cd8b81bc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737991133074898909,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5c3eeca5
4b9d4c809867b0068e23bd,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b633a0587bb586aef7ed03475c7c6a274bf8047bc6c8da9ff7c1cacdd5253,PodSandboxId:c7cabc746631d4b0d9fc926678fe856c472f9888c4e29e1ecdb46c285ed5a6f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737991133051882066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b766cee0dae5619cd018fd0fde307e,},Annotations:map[
string]string{io.kubernetes.container.hash: 223717f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1fa77825f6a4a3d4bb4a22bc1ba66efd4b8f7f168362d96f35d2e982129ed84,PodSandboxId:f6298aaeec657e8a9d6db878bc214772b48ff025f6e1d85469f0ca308e7e552d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737991133001569802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b21ca43e92ce1b49bc0e56b495d8ab0,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 66e717d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ac33e8bf486b20d5d7cad9a58a8b1f8478862c059cc06620b0e6fecb9cf3ba,PodSandboxId:a3e64943eac6fd8a466d8317ee4c179e18ce76f9f12fe7675cfa186474affe77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737991132989216438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56de9dba6c3f18e26dadb889554c792d,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8132d747-1588-4f5e-8b05-36be8bc7f607 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.983703369Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b26d1bef-7648-4b36-8856-2991aa57255b name=/runtime.v1.RuntimeService/Version
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.983850129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b26d1bef-7648-4b36-8856-2991aa57255b name=/runtime.v1.RuntimeService/Version
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.985259963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92aa807b-b4ad-4fb7-82c0-f60b03dd84fb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.985740695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991151985716846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92aa807b-b4ad-4fb7-82c0-f60b03dd84fb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.986339048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be297e24-9859-4c39-90e4-c240ff43dec4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.986405631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be297e24-9859-4c39-90e4-c240ff43dec4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:11 test-preload-400198 crio[667]: time="2025-01-27 15:19:11.986584093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dfbdc14791b84b851a9c965823d70adadd071511c5dc684deffd4ee6db4e313,PodSandboxId:b49041ab513f788948540b407070880fe46d7c4a0ee98ed8975fc82d73b6b281,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737991146540840222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-drdbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a68d285-3b27-4a75-9850-3e9f04bff887,},Annotations:map[string]string{io.kubernetes.container.hash: 2599a696,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1ba520f136e265d2aba0e922869cc3f7ca89629eb0bd9d821a2ff556108856,PodSandboxId:24aff06740d40de54f44a3c642dce6bf09c47b6f5c72e90000394765e7992afb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737991139660704644,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-786wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: da6a5f31-9169-474c-a9e3-fed9fbe14d26,},Annotations:map[string]string{io.kubernetes.container.hash: 97f9e31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42feffe14312c12cc9067b9c04e6e9ec49e8c36149a0a482f482367ebb482de5,PodSandboxId:5285626938a6c44a89b33dd65ae4f6b759222494c2269abf9d9c0506e54fe939,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737991139430928508,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 350
e8112-2ff9-40ec-8285-848dd4f5e878,},Annotations:map[string]string{io.kubernetes.container.hash: d8fa0381,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1bcb1969daf7a00e1b7f1f8eca2105b444d0cabed75cd70b4d07233bdb253f2,PodSandboxId:c737fe252189c2e306476085d1d5f7f01248e7fb9cdcde6beb86168cd8b81bc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737991133074898909,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5c3eeca5
4b9d4c809867b0068e23bd,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b633a0587bb586aef7ed03475c7c6a274bf8047bc6c8da9ff7c1cacdd5253,PodSandboxId:c7cabc746631d4b0d9fc926678fe856c472f9888c4e29e1ecdb46c285ed5a6f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737991133051882066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b766cee0dae5619cd018fd0fde307e,},Annotations:map[
string]string{io.kubernetes.container.hash: 223717f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1fa77825f6a4a3d4bb4a22bc1ba66efd4b8f7f168362d96f35d2e982129ed84,PodSandboxId:f6298aaeec657e8a9d6db878bc214772b48ff025f6e1d85469f0ca308e7e552d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737991133001569802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b21ca43e92ce1b49bc0e56b495d8ab0,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 66e717d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ac33e8bf486b20d5d7cad9a58a8b1f8478862c059cc06620b0e6fecb9cf3ba,PodSandboxId:a3e64943eac6fd8a466d8317ee4c179e18ce76f9f12fe7675cfa186474affe77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737991132989216438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56de9dba6c3f18e26dadb889554c792d,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be297e24-9859-4c39-90e4-c240ff43dec4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.026268012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b88cbe8-e394-4a64-974e-5a7cca634797 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.026340423Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b88cbe8-e394-4a64-974e-5a7cca634797 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.027451904Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2123d3b-b01f-4d94-98dd-3a47529f1bc8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.028127561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991152028100977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2123d3b-b01f-4d94-98dd-3a47529f1bc8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.029115830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=099dcdcc-d925-4e38-b34f-21c2da37bd9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.029170836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=099dcdcc-d925-4e38-b34f-21c2da37bd9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.029321071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dfbdc14791b84b851a9c965823d70adadd071511c5dc684deffd4ee6db4e313,PodSandboxId:b49041ab513f788948540b407070880fe46d7c4a0ee98ed8975fc82d73b6b281,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737991146540840222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-drdbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a68d285-3b27-4a75-9850-3e9f04bff887,},Annotations:map[string]string{io.kubernetes.container.hash: 2599a696,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1ba520f136e265d2aba0e922869cc3f7ca89629eb0bd9d821a2ff556108856,PodSandboxId:24aff06740d40de54f44a3c642dce6bf09c47b6f5c72e90000394765e7992afb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737991139660704644,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-786wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: da6a5f31-9169-474c-a9e3-fed9fbe14d26,},Annotations:map[string]string{io.kubernetes.container.hash: 97f9e31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42feffe14312c12cc9067b9c04e6e9ec49e8c36149a0a482f482367ebb482de5,PodSandboxId:5285626938a6c44a89b33dd65ae4f6b759222494c2269abf9d9c0506e54fe939,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737991139430928508,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 350
e8112-2ff9-40ec-8285-848dd4f5e878,},Annotations:map[string]string{io.kubernetes.container.hash: d8fa0381,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1bcb1969daf7a00e1b7f1f8eca2105b444d0cabed75cd70b4d07233bdb253f2,PodSandboxId:c737fe252189c2e306476085d1d5f7f01248e7fb9cdcde6beb86168cd8b81bc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737991133074898909,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5c3eeca5
4b9d4c809867b0068e23bd,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b633a0587bb586aef7ed03475c7c6a274bf8047bc6c8da9ff7c1cacdd5253,PodSandboxId:c7cabc746631d4b0d9fc926678fe856c472f9888c4e29e1ecdb46c285ed5a6f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737991133051882066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b766cee0dae5619cd018fd0fde307e,},Annotations:map[
string]string{io.kubernetes.container.hash: 223717f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1fa77825f6a4a3d4bb4a22bc1ba66efd4b8f7f168362d96f35d2e982129ed84,PodSandboxId:f6298aaeec657e8a9d6db878bc214772b48ff025f6e1d85469f0ca308e7e552d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737991133001569802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b21ca43e92ce1b49bc0e56b495d8ab0,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 66e717d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ac33e8bf486b20d5d7cad9a58a8b1f8478862c059cc06620b0e6fecb9cf3ba,PodSandboxId:a3e64943eac6fd8a466d8317ee4c179e18ce76f9f12fe7675cfa186474affe77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737991132989216438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56de9dba6c3f18e26dadb889554c792d,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=099dcdcc-d925-4e38-b34f-21c2da37bd9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.064563450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64cf4c36-32bd-46c5-8f16-1a6203ff1cb0 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.064644816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64cf4c36-32bd-46c5-8f16-1a6203ff1cb0 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.065947052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a39c53d5-936d-4dad-9ba5-98398c2aa531 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.066359564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991152066338742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a39c53d5-936d-4dad-9ba5-98398c2aa531 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.067280470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ae1cde4-fc3c-4ba6-b920-f00c61acc592 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.067334307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ae1cde4-fc3c-4ba6-b920-f00c61acc592 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:19:12 test-preload-400198 crio[667]: time="2025-01-27 15:19:12.067483758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5dfbdc14791b84b851a9c965823d70adadd071511c5dc684deffd4ee6db4e313,PodSandboxId:b49041ab513f788948540b407070880fe46d7c4a0ee98ed8975fc82d73b6b281,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737991146540840222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-drdbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a68d285-3b27-4a75-9850-3e9f04bff887,},Annotations:map[string]string{io.kubernetes.container.hash: 2599a696,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1ba520f136e265d2aba0e922869cc3f7ca89629eb0bd9d821a2ff556108856,PodSandboxId:24aff06740d40de54f44a3c642dce6bf09c47b6f5c72e90000394765e7992afb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737991139660704644,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-786wt,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: da6a5f31-9169-474c-a9e3-fed9fbe14d26,},Annotations:map[string]string{io.kubernetes.container.hash: 97f9e31d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42feffe14312c12cc9067b9c04e6e9ec49e8c36149a0a482f482367ebb482de5,PodSandboxId:5285626938a6c44a89b33dd65ae4f6b759222494c2269abf9d9c0506e54fe939,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1737991139430928508,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 350
e8112-2ff9-40ec-8285-848dd4f5e878,},Annotations:map[string]string{io.kubernetes.container.hash: d8fa0381,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1bcb1969daf7a00e1b7f1f8eca2105b444d0cabed75cd70b4d07233bdb253f2,PodSandboxId:c737fe252189c2e306476085d1d5f7f01248e7fb9cdcde6beb86168cd8b81bc5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737991133074898909,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed5c3eeca5
4b9d4c809867b0068e23bd,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b633a0587bb586aef7ed03475c7c6a274bf8047bc6c8da9ff7c1cacdd5253,PodSandboxId:c7cabc746631d4b0d9fc926678fe856c472f9888c4e29e1ecdb46c285ed5a6f9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737991133051882066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b766cee0dae5619cd018fd0fde307e,},Annotations:map[
string]string{io.kubernetes.container.hash: 223717f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1fa77825f6a4a3d4bb4a22bc1ba66efd4b8f7f168362d96f35d2e982129ed84,PodSandboxId:f6298aaeec657e8a9d6db878bc214772b48ff025f6e1d85469f0ca308e7e552d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737991133001569802,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b21ca43e92ce1b49bc0e56b495d8ab0,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 66e717d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ac33e8bf486b20d5d7cad9a58a8b1f8478862c059cc06620b0e6fecb9cf3ba,PodSandboxId:a3e64943eac6fd8a466d8317ee4c179e18ce76f9f12fe7675cfa186474affe77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737991132989216438,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-400198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56de9dba6c3f18e26dadb889554c792d,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ae1cde4-fc3c-4ba6-b920-f00c61acc592 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5dfbdc14791b8       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   b49041ab513f7       coredns-6d4b75cb6d-drdbq
	0f1ba520f136e       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   24aff06740d40       kube-proxy-786wt
	42feffe14312c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       2                   5285626938a6c       storage-provisioner
	e1bcb1969daf7       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   c737fe252189c       kube-scheduler-test-preload-400198
	215b633a0587b       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   c7cabc746631d       etcd-test-preload-400198
	f1fa77825f6a4       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   f6298aaeec657       kube-apiserver-test-preload-400198
	68ac33e8bf486       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   a3e64943eac6f       kube-controller-manager-test-preload-400198
	
	
	==> coredns [5dfbdc14791b84b851a9c965823d70adadd071511c5dc684deffd4ee6db4e313] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46938 - 7905 "HINFO IN 7171875435110338108.1321399896836621472. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032972005s
	
	
	==> describe nodes <==
	Name:               test-preload-400198
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-400198
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=test-preload-400198
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T15_17_35_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 15:17:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-400198
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 15:19:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 15:19:07 +0000   Mon, 27 Jan 2025 15:17:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 15:19:07 +0000   Mon, 27 Jan 2025 15:17:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 15:19:07 +0000   Mon, 27 Jan 2025 15:17:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 15:19:07 +0000   Mon, 27 Jan 2025 15:19:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    test-preload-400198
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 286ffc6023924ada9192caed442c95c0
	  System UUID:                286ffc60-2392-4ada-9192-caed442c95c0
	  Boot ID:                    7382583a-936f-4363-a1b4-31b38f554c6c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-drdbq                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-test-preload-400198                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         97s
	  kube-system                 kube-apiserver-test-preload-400198             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-400198    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-786wt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-test-preload-400198             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12s                  kube-proxy       
	  Normal  Starting                 82s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  106s (x5 over 106s)  kubelet          Node test-preload-400198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x5 over 106s)  kubelet          Node test-preload-400198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x5 over 106s)  kubelet          Node test-preload-400198 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node test-preload-400198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node test-preload-400198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node test-preload-400198 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                87s                  kubelet          Node test-preload-400198 status is now: NodeReady
	  Normal  RegisteredNode           86s                  node-controller  Node test-preload-400198 event: Registered Node test-preload-400198 in Controller
	  Normal  Starting                 20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node test-preload-400198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node test-preload-400198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node test-preload-400198 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                   node-controller  Node test-preload-400198 event: Registered Node test-preload-400198 in Controller
	
	
	==> dmesg <==
	[Jan27 15:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052695] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041806] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.950879] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.810993] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.608712] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.266832] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.060013] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056331] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.190216] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.121476] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.293041] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +13.544295] systemd-fstab-generator[991]: Ignoring "noauto" option for root device
	[  +0.060361] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.865283] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +6.947756] kauditd_printk_skb: 108 callbacks suppressed
	[Jan27 15:19] systemd-fstab-generator[1821]: Ignoring "noauto" option for root device
	[  +6.138388] kauditd_printk_skb: 56 callbacks suppressed
	
	
	==> etcd [215b633a0587bb586aef7ed03475c7c6a274bf8047bc6c8da9ff7c1cacdd5253] <==
	{"level":"info","ts":"2025-01-27T15:18:53.556Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"c23cd90330b5fc4f","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T15:18:53.561Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T15:18:53.561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f switched to configuration voters=(13996300349686021199)"}
	{"level":"info","ts":"2025-01-27T15:18:53.563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","added-peer-id":"c23cd90330b5fc4f","added-peer-peer-urls":["https://192.168.39.94:2380"]}
	{"level":"info","ts":"2025-01-27T15:18:53.564Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T15:18:53.564Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T15:18:53.569Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T15:18:53.569Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c23cd90330b5fc4f","initial-advertise-peer-urls":["https://192.168.39.94:2380"],"listen-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.94:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T15:18:53.570Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T15:18:53.570Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2025-01-27T15:18:53.570Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2025-01-27T15:18:55.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T15:18:55.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T15:18:55.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgPreVoteResp from c23cd90330b5fc4f at term 2"}
	{"level":"info","ts":"2025-01-27T15:18:55.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T15:18:55.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgVoteResp from c23cd90330b5fc4f at term 3"}
	{"level":"info","ts":"2025-01-27T15:18:55.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became leader at term 3"}
	{"level":"info","ts":"2025-01-27T15:18:55.096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c23cd90330b5fc4f elected leader c23cd90330b5fc4f at term 3"}
	{"level":"info","ts":"2025-01-27T15:18:55.101Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"c23cd90330b5fc4f","local-member-attributes":"{Name:test-preload-400198 ClientURLs:[https://192.168.39.94:2379]}","request-path":"/0/members/c23cd90330b5fc4f/attributes","cluster-id":"f81fab91992620a9","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T15:18:55.101Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T15:18:55.101Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T15:18:55.102Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.94:2379"}
	{"level":"info","ts":"2025-01-27T15:18:55.103Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T15:18:55.103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T15:18:55.104Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:19:12 up 0 min,  0 users,  load average: 0.49, 0.16, 0.06
	Linux test-preload-400198 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f1fa77825f6a4a3d4bb4a22bc1ba66efd4b8f7f168362d96f35d2e982129ed84] <==
	I0127 15:18:57.477009       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0127 15:18:57.466830       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 15:18:57.506213       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0127 15:18:57.506245       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0127 15:18:57.506300       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 15:18:57.508829       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 15:18:57.586102       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0127 15:18:57.586412       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0127 15:18:57.598030       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 15:18:57.608219       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0127 15:18:57.608396       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0127 15:18:57.611499       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0127 15:18:57.670138       1 cache.go:39] Caches are synced for autoregister controller
	I0127 15:18:57.671184       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 15:18:57.673190       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0127 15:18:58.174965       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 15:18:58.480942       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 15:18:59.067371       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0127 15:18:59.086370       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0127 15:18:59.134265       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0127 15:18:59.167317       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 15:18:59.180595       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 15:18:59.963010       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0127 15:19:10.268545       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 15:19:10.318619       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [68ac33e8bf486b20d5d7cad9a58a8b1f8478862c059cc06620b0e6fecb9cf3ba] <==
	I0127 15:19:10.113912       1 shared_informer.go:262] Caches are synced for persistent volume
	I0127 15:19:10.114146       1 shared_informer.go:262] Caches are synced for namespace
	I0127 15:19:10.116142       1 shared_informer.go:262] Caches are synced for service account
	I0127 15:19:10.118864       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0127 15:19:10.121282       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0127 15:19:10.122881       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0127 15:19:10.134572       1 shared_informer.go:262] Caches are synced for node
	I0127 15:19:10.134672       1 range_allocator.go:173] Starting range CIDR allocator
	I0127 15:19:10.134713       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0127 15:19:10.134722       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0127 15:19:10.154866       1 shared_informer.go:262] Caches are synced for cronjob
	I0127 15:19:10.157951       1 shared_informer.go:262] Caches are synced for attach detach
	I0127 15:19:10.177649       1 shared_informer.go:262] Caches are synced for taint
	I0127 15:19:10.177921       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0127 15:19:10.178063       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-400198. Assuming now as a timestamp.
	I0127 15:19:10.178129       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0127 15:19:10.177934       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0127 15:19:10.178307       1 event.go:294] "Event occurred" object="test-preload-400198" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-400198 event: Registered Node test-preload-400198 in Controller"
	I0127 15:19:10.249149       1 shared_informer.go:262] Caches are synced for daemon sets
	I0127 15:19:10.274557       1 shared_informer.go:262] Caches are synced for stateful set
	I0127 15:19:10.324092       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 15:19:10.344197       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 15:19:10.745809       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 15:19:10.745944       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0127 15:19:10.758584       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [0f1ba520f136e265d2aba0e922869cc3f7ca89629eb0bd9d821a2ff556108856] <==
	I0127 15:18:59.915837       1 node.go:163] Successfully retrieved node IP: 192.168.39.94
	I0127 15:18:59.916035       1 server_others.go:138] "Detected node IP" address="192.168.39.94"
	I0127 15:18:59.916149       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0127 15:18:59.950245       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0127 15:18:59.950362       1 server_others.go:206] "Using iptables Proxier"
	I0127 15:18:59.950702       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0127 15:18:59.951414       1 server.go:661] "Version info" version="v1.24.4"
	I0127 15:18:59.951474       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:18:59.955578       1 config.go:317] "Starting service config controller"
	I0127 15:18:59.955863       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0127 15:18:59.955972       1 config.go:226] "Starting endpoint slice config controller"
	I0127 15:18:59.955996       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0127 15:18:59.956905       1 config.go:444] "Starting node config controller"
	I0127 15:18:59.956948       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0127 15:19:00.056134       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0127 15:19:00.056170       1 shared_informer.go:262] Caches are synced for service config
	I0127 15:19:00.057698       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [e1bcb1969daf7a00e1b7f1f8eca2105b444d0cabed75cd70b4d07233bdb253f2] <==
	I0127 15:18:53.950479       1 serving.go:348] Generated self-signed cert in-memory
	W0127 15:18:57.525893       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 15:18:57.526080       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 15:18:57.526169       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 15:18:57.526192       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 15:18:57.616511       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0127 15:18:57.616614       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:18:57.619474       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0127 15:18:57.619820       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 15:18:57.621345       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 15:18:57.619852       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0127 15:18:57.722870       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: E0127 15:18:58.294009    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-drdbq" podUID=8a68d285-3b27-4a75-9850-3e9f04bff887
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343132    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l27zv\" (UniqueName: \"kubernetes.io/projected/8a68d285-3b27-4a75-9850-3e9f04bff887-kube-api-access-l27zv\") pod \"coredns-6d4b75cb6d-drdbq\" (UID: \"8a68d285-3b27-4a75-9850-3e9f04bff887\") " pod="kube-system/coredns-6d4b75cb6d-drdbq"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343185    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vwb\" (UniqueName: \"kubernetes.io/projected/350e8112-2ff9-40ec-8285-848dd4f5e878-kube-api-access-d9vwb\") pod \"storage-provisioner\" (UID: \"350e8112-2ff9-40ec-8285-848dd4f5e878\") " pod="kube-system/storage-provisioner"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343210    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da6a5f31-9169-474c-a9e3-fed9fbe14d26-kube-proxy\") pod \"kube-proxy-786wt\" (UID: \"da6a5f31-9169-474c-a9e3-fed9fbe14d26\") " pod="kube-system/kube-proxy-786wt"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343228    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da6a5f31-9169-474c-a9e3-fed9fbe14d26-xtables-lock\") pod \"kube-proxy-786wt\" (UID: \"da6a5f31-9169-474c-a9e3-fed9fbe14d26\") " pod="kube-system/kube-proxy-786wt"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343248    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume\") pod \"coredns-6d4b75cb6d-drdbq\" (UID: \"8a68d285-3b27-4a75-9850-3e9f04bff887\") " pod="kube-system/coredns-6d4b75cb6d-drdbq"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343265    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9cj\" (UniqueName: \"kubernetes.io/projected/da6a5f31-9169-474c-a9e3-fed9fbe14d26-kube-api-access-mh9cj\") pod \"kube-proxy-786wt\" (UID: \"da6a5f31-9169-474c-a9e3-fed9fbe14d26\") " pod="kube-system/kube-proxy-786wt"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343283    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/350e8112-2ff9-40ec-8285-848dd4f5e878-tmp\") pod \"storage-provisioner\" (UID: \"350e8112-2ff9-40ec-8285-848dd4f5e878\") " pod="kube-system/storage-provisioner"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343302    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da6a5f31-9169-474c-a9e3-fed9fbe14d26-lib-modules\") pod \"kube-proxy-786wt\" (UID: \"da6a5f31-9169-474c-a9e3-fed9fbe14d26\") " pod="kube-system/kube-proxy-786wt"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: I0127 15:18:58.343315    1126 reconciler.go:159] "Reconciler: start to sync state"
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: E0127 15:18:58.447533    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: E0127 15:18:58.447925    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume podName:8a68d285-3b27-4a75-9850-3e9f04bff887 nodeName:}" failed. No retries permitted until 2025-01-27 15:18:58.947846378 +0000 UTC m=+6.825719321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume") pod "coredns-6d4b75cb6d-drdbq" (UID: "8a68d285-3b27-4a75-9850-3e9f04bff887") : object "kube-system"/"coredns" not registered
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: E0127 15:18:58.951748    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 15:18:58 test-preload-400198 kubelet[1126]: E0127 15:18:58.951900    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume podName:8a68d285-3b27-4a75-9850-3e9f04bff887 nodeName:}" failed. No retries permitted until 2025-01-27 15:18:59.951883577 +0000 UTC m=+7.829756530 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume") pod "coredns-6d4b75cb6d-drdbq" (UID: "8a68d285-3b27-4a75-9850-3e9f04bff887") : object "kube-system"/"coredns" not registered
	Jan 27 15:18:59 test-preload-400198 kubelet[1126]: I0127 15:18:59.423655    1126 scope.go:110] "RemoveContainer" containerID="b8198076eaad4aa22b0e11d43e6719798574340ad1b2042e7ae6cdfd8704d7ee"
	Jan 27 15:18:59 test-preload-400198 kubelet[1126]: E0127 15:18:59.959418    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 15:18:59 test-preload-400198 kubelet[1126]: E0127 15:18:59.959489    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume podName:8a68d285-3b27-4a75-9850-3e9f04bff887 nodeName:}" failed. No retries permitted until 2025-01-27 15:19:01.959473366 +0000 UTC m=+9.837346308 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume") pod "coredns-6d4b75cb6d-drdbq" (UID: "8a68d285-3b27-4a75-9850-3e9f04bff887") : object "kube-system"/"coredns" not registered
	Jan 27 15:19:00 test-preload-400198 kubelet[1126]: E0127 15:19:00.376873    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-drdbq" podUID=8a68d285-3b27-4a75-9850-3e9f04bff887
	Jan 27 15:19:00 test-preload-400198 kubelet[1126]: I0127 15:19:00.440374    1126 scope.go:110] "RemoveContainer" containerID="b8198076eaad4aa22b0e11d43e6719798574340ad1b2042e7ae6cdfd8704d7ee"
	Jan 27 15:19:00 test-preload-400198 kubelet[1126]: I0127 15:19:00.440677    1126 scope.go:110] "RemoveContainer" containerID="42feffe14312c12cc9067b9c04e6e9ec49e8c36149a0a482f482367ebb482de5"
	Jan 27 15:19:00 test-preload-400198 kubelet[1126]: E0127 15:19:00.441022    1126 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(350e8112-2ff9-40ec-8285-848dd4f5e878)\"" pod="kube-system/storage-provisioner" podUID=350e8112-2ff9-40ec-8285-848dd4f5e878
	Jan 27 15:19:01 test-preload-400198 kubelet[1126]: I0127 15:19:01.452491    1126 scope.go:110] "RemoveContainer" containerID="42feffe14312c12cc9067b9c04e6e9ec49e8c36149a0a482f482367ebb482de5"
	Jan 27 15:19:01 test-preload-400198 kubelet[1126]: E0127 15:19:01.452694    1126 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(350e8112-2ff9-40ec-8285-848dd4f5e878)\"" pod="kube-system/storage-provisioner" podUID=350e8112-2ff9-40ec-8285-848dd4f5e878
	Jan 27 15:19:01 test-preload-400198 kubelet[1126]: E0127 15:19:01.977749    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 15:19:01 test-preload-400198 kubelet[1126]: E0127 15:19:01.977924    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume podName:8a68d285-3b27-4a75-9850-3e9f04bff887 nodeName:}" failed. No retries permitted until 2025-01-27 15:19:05.977907631 +0000 UTC m=+13.855780585 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8a68d285-3b27-4a75-9850-3e9f04bff887-config-volume") pod "coredns-6d4b75cb6d-drdbq" (UID: "8a68d285-3b27-4a75-9850-3e9f04bff887") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [42feffe14312c12cc9067b9c04e6e9ec49e8c36149a0a482f482367ebb482de5] <==
	I0127 15:18:59.501257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0127 15:18:59.504677       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-400198 -n test-preload-400198
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-400198 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-400198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-400198
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-400198: (1.144617883s)
--- FAIL: TestPreload (172.35s)

                                                
                                    
x
+
TestKubernetesUpgrade (421.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m58.622016913s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-878562] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-878562" primary control-plane node in "kubernetes-upgrade-878562" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:21:08.969794 1052504 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:21:08.970073 1052504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:21:08.970085 1052504 out.go:358] Setting ErrFile to fd 2...
	I0127 15:21:08.970092 1052504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:21:08.970370 1052504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:21:08.971061 1052504 out.go:352] Setting JSON to false
	I0127 15:21:08.972119 1052504 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21816,"bootTime":1737969453,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:21:08.972186 1052504 start.go:139] virtualization: kvm guest
	I0127 15:21:08.974669 1052504 out.go:177] * [kubernetes-upgrade-878562] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:21:08.976441 1052504 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:21:08.976457 1052504 notify.go:220] Checking for updates...
	I0127 15:21:08.979741 1052504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:21:08.982160 1052504 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:21:08.985037 1052504 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:21:08.986218 1052504 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:21:08.987382 1052504 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:21:08.989178 1052504 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:21:09.026568 1052504 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 15:21:09.027680 1052504 start.go:297] selected driver: kvm2
	I0127 15:21:09.027693 1052504 start.go:901] validating driver "kvm2" against <nil>
	I0127 15:21:09.027709 1052504 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:21:09.028667 1052504 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:21:09.045606 1052504 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:21:09.062814 1052504 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:21:09.062877 1052504 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 15:21:09.063229 1052504 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 15:21:09.063267 1052504 cni.go:84] Creating CNI manager for ""
	I0127 15:21:09.063335 1052504 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:21:09.063344 1052504 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 15:21:09.063421 1052504 start.go:340] cluster config:
	{Name:kubernetes-upgrade-878562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:21:09.063582 1052504 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:21:09.065342 1052504 out.go:177] * Starting "kubernetes-upgrade-878562" primary control-plane node in "kubernetes-upgrade-878562" cluster
	I0127 15:21:09.066689 1052504 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:21:09.066757 1052504 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 15:21:09.066770 1052504 cache.go:56] Caching tarball of preloaded images
	I0127 15:21:09.066925 1052504 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:21:09.066941 1052504 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 15:21:09.067399 1052504 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/config.json ...
	I0127 15:21:09.067434 1052504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/config.json: {Name:mk1000e80954c34f490d3948452c1ea49f857f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:21:09.067603 1052504 start.go:360] acquireMachinesLock for kubernetes-upgrade-878562: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:21:34.110042 1052504 start.go:364] duration metric: took 25.042397219s to acquireMachinesLock for "kubernetes-upgrade-878562"
	I0127 15:21:34.110132 1052504 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-878562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:21:34.110252 1052504 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 15:21:34.112319 1052504 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 15:21:34.112528 1052504 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:21:34.112585 1052504 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:21:34.129662 1052504 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I0127 15:21:34.130092 1052504 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:21:34.130738 1052504 main.go:141] libmachine: Using API Version  1
	I0127 15:21:34.130765 1052504 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:21:34.131108 1052504 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:21:34.131408 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetMachineName
	I0127 15:21:34.131623 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:21:34.131811 1052504 start.go:159] libmachine.API.Create for "kubernetes-upgrade-878562" (driver="kvm2")
	I0127 15:21:34.131846 1052504 client.go:168] LocalClient.Create starting
	I0127 15:21:34.131896 1052504 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem
	I0127 15:21:34.131940 1052504 main.go:141] libmachine: Decoding PEM data...
	I0127 15:21:34.131963 1052504 main.go:141] libmachine: Parsing certificate...
	I0127 15:21:34.132038 1052504 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem
	I0127 15:21:34.132063 1052504 main.go:141] libmachine: Decoding PEM data...
	I0127 15:21:34.132080 1052504 main.go:141] libmachine: Parsing certificate...
	I0127 15:21:34.132105 1052504 main.go:141] libmachine: Running pre-create checks...
	I0127 15:21:34.132119 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .PreCreateCheck
	I0127 15:21:34.132485 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetConfigRaw
	I0127 15:21:34.132949 1052504 main.go:141] libmachine: Creating machine...
	I0127 15:21:34.132969 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .Create
	I0127 15:21:34.133117 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) creating KVM machine...
	I0127 15:21:34.133137 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) creating network...
	I0127 15:21:34.134259 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found existing default KVM network
	I0127 15:21:34.135224 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:34.135070 1054919 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:4b:dd} reservation:<nil>}
	I0127 15:21:34.135992 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:34.135900 1054919 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00024a660}
	I0127 15:21:34.136019 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | created network xml: 
	I0127 15:21:34.136032 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | <network>
	I0127 15:21:34.136046 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |   <name>mk-kubernetes-upgrade-878562</name>
	I0127 15:21:34.136056 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |   <dns enable='no'/>
	I0127 15:21:34.136065 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |   
	I0127 15:21:34.136085 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 15:21:34.136091 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |     <dhcp>
	I0127 15:21:34.136103 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 15:21:34.136112 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |     </dhcp>
	I0127 15:21:34.136141 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |   </ip>
	I0127 15:21:34.136170 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG |   
	I0127 15:21:34.136182 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | </network>
	I0127 15:21:34.136190 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | 
	I0127 15:21:34.141470 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | trying to create private KVM network mk-kubernetes-upgrade-878562 192.168.50.0/24...
	I0127 15:21:34.212431 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | private KVM network mk-kubernetes-upgrade-878562 192.168.50.0/24 created
	I0127 15:21:34.212465 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:34.212389 1054919 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:21:34.212479 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) setting up store path in /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562 ...
	I0127 15:21:34.212494 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) building disk image from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 15:21:34.212560 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Downloading /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 15:21:34.514998 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:34.514857 1054919 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa...
	I0127 15:21:34.644956 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:34.644809 1054919 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/kubernetes-upgrade-878562.rawdisk...
	I0127 15:21:34.644991 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Writing magic tar header
	I0127 15:21:34.645021 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Writing SSH key tar header
	I0127 15:21:34.645035 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:34.644927 1054919 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562 ...
	I0127 15:21:34.645102 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562
	I0127 15:21:34.645155 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562 (perms=drwx------)
	I0127 15:21:34.645171 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines (perms=drwxr-xr-x)
	I0127 15:21:34.645179 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines
	I0127 15:21:34.645189 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:21:34.645216 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652
	I0127 15:21:34.645226 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 15:21:34.645231 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | checking permissions on dir: /home/jenkins
	I0127 15:21:34.645242 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | checking permissions on dir: /home
	I0127 15:21:34.645252 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | skipping /home - not owner
	I0127 15:21:34.645277 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube (perms=drwxr-xr-x)
	I0127 15:21:34.645302 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652 (perms=drwxrwxr-x)
	I0127 15:21:34.645311 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 15:21:34.645331 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 15:21:34.645353 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) creating domain...
	I0127 15:21:34.646367 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) define libvirt domain using xml: 
	I0127 15:21:34.646385 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) <domain type='kvm'>
	I0127 15:21:34.646395 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   <name>kubernetes-upgrade-878562</name>
	I0127 15:21:34.646402 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   <memory unit='MiB'>2200</memory>
	I0127 15:21:34.646411 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   <vcpu>2</vcpu>
	I0127 15:21:34.646423 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   <features>
	I0127 15:21:34.646431 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <acpi/>
	I0127 15:21:34.646450 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <apic/>
	I0127 15:21:34.646462 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <pae/>
	I0127 15:21:34.646477 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     
	I0127 15:21:34.646487 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   </features>
	I0127 15:21:34.646492 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   <cpu mode='host-passthrough'>
	I0127 15:21:34.646514 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   
	I0127 15:21:34.646536 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   </cpu>
	I0127 15:21:34.646549 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   <os>
	I0127 15:21:34.646557 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <type>hvm</type>
	I0127 15:21:34.646567 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <boot dev='cdrom'/>
	I0127 15:21:34.646572 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <boot dev='hd'/>
	I0127 15:21:34.646577 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <bootmenu enable='no'/>
	I0127 15:21:34.646584 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   </os>
	I0127 15:21:34.646589 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   <devices>
	I0127 15:21:34.646596 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <disk type='file' device='cdrom'>
	I0127 15:21:34.646604 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/boot2docker.iso'/>
	I0127 15:21:34.646619 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <target dev='hdc' bus='scsi'/>
	I0127 15:21:34.646633 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <readonly/>
	I0127 15:21:34.646640 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     </disk>
	I0127 15:21:34.646653 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <disk type='file' device='disk'>
	I0127 15:21:34.646666 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 15:21:34.646686 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/kubernetes-upgrade-878562.rawdisk'/>
	I0127 15:21:34.646698 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <target dev='hda' bus='virtio'/>
	I0127 15:21:34.646704 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     </disk>
	I0127 15:21:34.646713 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <interface type='network'>
	I0127 15:21:34.646724 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <source network='mk-kubernetes-upgrade-878562'/>
	I0127 15:21:34.646735 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <model type='virtio'/>
	I0127 15:21:34.646745 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     </interface>
	I0127 15:21:34.646756 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <interface type='network'>
	I0127 15:21:34.646768 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <source network='default'/>
	I0127 15:21:34.646783 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <model type='virtio'/>
	I0127 15:21:34.646794 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     </interface>
	I0127 15:21:34.646804 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <serial type='pty'>
	I0127 15:21:34.646810 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <target port='0'/>
	I0127 15:21:34.646819 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     </serial>
	I0127 15:21:34.646834 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <console type='pty'>
	I0127 15:21:34.646846 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <target type='serial' port='0'/>
	I0127 15:21:34.646883 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     </console>
	I0127 15:21:34.646905 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     <rng model='virtio'>
	I0127 15:21:34.646921 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)       <backend model='random'>/dev/random</backend>
	I0127 15:21:34.646937 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     </rng>
	I0127 15:21:34.646949 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     
	I0127 15:21:34.646964 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)     
	I0127 15:21:34.646976 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562)   </devices>
	I0127 15:21:34.646987 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) </domain>
	I0127 15:21:34.646999 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) 
	I0127 15:21:34.651781 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:f0:7b:f0 in network default
	I0127 15:21:34.652351 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) starting domain...
	I0127 15:21:34.652386 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:34.652396 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) ensuring networks are active...
	I0127 15:21:34.653094 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Ensuring network default is active
	I0127 15:21:34.653435 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Ensuring network mk-kubernetes-upgrade-878562 is active
	I0127 15:21:34.654044 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) getting domain XML...
	I0127 15:21:34.654869 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) creating domain...
	I0127 15:21:36.009798 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) waiting for IP...
	I0127 15:21:36.010890 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:36.011307 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:36.011387 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:36.011328 1054919 retry.go:31] will retry after 274.504455ms: waiting for domain to come up
	I0127 15:21:36.288050 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:36.288632 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:36.288689 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:36.288609 1054919 retry.go:31] will retry after 270.551815ms: waiting for domain to come up
	I0127 15:21:36.562691 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:36.563204 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:36.563232 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:36.563179 1054919 retry.go:31] will retry after 488.196737ms: waiting for domain to come up
	I0127 15:21:37.052795 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:37.053256 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:37.053293 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:37.053218 1054919 retry.go:31] will retry after 430.506382ms: waiting for domain to come up
	I0127 15:21:37.486088 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:37.486677 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:37.486706 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:37.486622 1054919 retry.go:31] will retry after 620.139263ms: waiting for domain to come up
	I0127 15:21:38.108502 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:38.108993 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:38.109060 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:38.108939 1054919 retry.go:31] will retry after 603.436003ms: waiting for domain to come up
	I0127 15:21:38.713986 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:38.714489 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:38.714518 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:38.714452 1054919 retry.go:31] will retry after 1.181288085s: waiting for domain to come up
	I0127 15:21:39.897840 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:39.898297 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:39.898323 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:39.898278 1054919 retry.go:31] will retry after 1.405867036s: waiting for domain to come up
	I0127 15:21:41.305612 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:41.306136 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:41.306165 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:41.306117 1054919 retry.go:31] will retry after 1.31343053s: waiting for domain to come up
	I0127 15:21:42.621618 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:42.622128 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:42.622153 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:42.622125 1054919 retry.go:31] will retry after 2.321318154s: waiting for domain to come up
	I0127 15:21:44.945042 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:44.945631 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:44.945661 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:44.945609 1054919 retry.go:31] will retry after 1.90927123s: waiting for domain to come up
	I0127 15:21:46.857232 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:46.857774 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:46.857807 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:46.857730 1054919 retry.go:31] will retry after 2.452357003s: waiting for domain to come up
	I0127 15:21:49.311924 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:49.312324 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:49.312356 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:49.312273 1054919 retry.go:31] will retry after 3.35979957s: waiting for domain to come up
	I0127 15:21:52.674777 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:52.675241 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find current IP address of domain kubernetes-upgrade-878562 in network mk-kubernetes-upgrade-878562
	I0127 15:21:52.675315 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | I0127 15:21:52.675243 1054919 retry.go:31] will retry after 4.279177779s: waiting for domain to come up
	I0127 15:21:56.956081 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:56.956532 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) found domain IP: 192.168.50.193
	I0127 15:21:56.956553 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) reserving static IP address...
	I0127 15:21:56.956568 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has current primary IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:56.956925 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-878562", mac: "52:54:00:32:1e:51", ip: "192.168.50.193"} in network mk-kubernetes-upgrade-878562
	I0127 15:21:57.034878 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) reserved static IP address 192.168.50.193 for domain kubernetes-upgrade-878562
	I0127 15:21:57.034907 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) waiting for SSH...
	I0127 15:21:57.034939 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Getting to WaitForSSH function...
	I0127 15:21:57.038154 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:21:57.038599 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562
	I0127 15:21:57.038634 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-878562 interface with MAC address 52:54:00:32:1e:51
	I0127 15:21:57.038735 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Using SSH client type: external
	I0127 15:21:57.038762 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa (-rw-------)
	I0127 15:21:57.038803 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:21:57.038828 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | About to run SSH command:
	I0127 15:21:57.038850 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | exit 0
	I0127 15:21:57.042599 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | SSH cmd err, output: exit status 255: 
	I0127 15:21:57.042628 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0127 15:21:57.042638 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | command : exit 0
	I0127 15:21:57.042646 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | err     : exit status 255
	I0127 15:21:57.042657 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | output  : 
	I0127 15:22:00.043259 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Getting to WaitForSSH function...
	I0127 15:22:00.045897 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.046303 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.046340 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.046471 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Using SSH client type: external
	I0127 15:22:00.046497 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa (-rw-------)
	I0127 15:22:00.046528 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:22:00.046541 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | About to run SSH command:
	I0127 15:22:00.046555 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | exit 0
	I0127 15:22:00.173226 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | SSH cmd err, output: <nil>: 
	I0127 15:22:00.173510 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) KVM machine creation complete
	I0127 15:22:00.173750 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetConfigRaw
	I0127 15:22:00.174467 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:22:00.174661 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:22:00.174807 1052504 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 15:22:00.174819 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetState
	I0127 15:22:00.176105 1052504 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 15:22:00.176120 1052504 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 15:22:00.176125 1052504 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 15:22:00.176130 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:00.178409 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.178695 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.178725 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.178862 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:00.179049 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.179225 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.179368 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:00.179553 1052504 main.go:141] libmachine: Using SSH client type: native
	I0127 15:22:00.179741 1052504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 15:22:00.179751 1052504 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 15:22:00.296584 1052504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:22:00.296613 1052504 main.go:141] libmachine: Detecting the provisioner...
	I0127 15:22:00.296621 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:00.299727 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.300086 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.300124 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.300323 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:00.300534 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.300709 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.300853 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:00.301162 1052504 main.go:141] libmachine: Using SSH client type: native
	I0127 15:22:00.301391 1052504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 15:22:00.301404 1052504 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 15:22:00.418289 1052504 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 15:22:00.418386 1052504 main.go:141] libmachine: found compatible host: buildroot
	I0127 15:22:00.418396 1052504 main.go:141] libmachine: Provisioning with buildroot...
	I0127 15:22:00.418405 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetMachineName
	I0127 15:22:00.418676 1052504 buildroot.go:166] provisioning hostname "kubernetes-upgrade-878562"
	I0127 15:22:00.418714 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetMachineName
	I0127 15:22:00.418902 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:00.421564 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.421920 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.421948 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.422085 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:00.422269 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.422442 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.422554 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:00.422815 1052504 main.go:141] libmachine: Using SSH client type: native
	I0127 15:22:00.423021 1052504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 15:22:00.423035 1052504 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-878562 && echo "kubernetes-upgrade-878562" | sudo tee /etc/hostname
	I0127 15:22:00.552031 1052504 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-878562
	
	I0127 15:22:00.552057 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:00.555290 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.555666 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.555697 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.555938 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:00.556159 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.556316 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.556496 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:00.556712 1052504 main.go:141] libmachine: Using SSH client type: native
	I0127 15:22:00.556886 1052504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 15:22:00.556908 1052504 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-878562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-878562/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-878562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:22:00.682886 1052504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
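Each provisioning step above is a plain shell command executed on the guest over SSH (the "native" Go SSH client named in the log). Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the endpoint, user, and key path are illustrative placeholders rather than values from this run, and host-key checking is skipped only because these are throwaway test VMs.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH runs one shell command on the guest and returns its combined
// stdout/stderr, mirroring the "About to run SSH command" / "SSH cmd err,
// output" pairs in the log above.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for ephemeral test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder endpoint and key path; substitute the machine's real values.
	out, err := runOverSSH("192.168.50.193:22", "docker", "/path/to/machine/id_rsa",
		"cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}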
	I0127 15:22:00.682919 1052504 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:22:00.682940 1052504 buildroot.go:174] setting up certificates
	I0127 15:22:00.682952 1052504 provision.go:84] configureAuth start
	I0127 15:22:00.682963 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetMachineName
	I0127 15:22:00.683328 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetIP
	I0127 15:22:00.685991 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.686396 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.686428 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.686569 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:00.688926 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.689236 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.689289 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.689409 1052504 provision.go:143] copyHostCerts
	I0127 15:22:00.689568 1052504 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:22:00.689604 1052504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:22:00.689680 1052504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:22:00.689799 1052504 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:22:00.689809 1052504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:22:00.689838 1052504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:22:00.689926 1052504 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:22:00.689937 1052504 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:22:00.690017 1052504 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:22:00.690097 1052504 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-878562 san=[127.0.0.1 192.168.50.193 kubernetes-upgrade-878562 localhost minikube]
	I0127 15:22:00.788361 1052504 provision.go:177] copyRemoteCerts
	I0127 15:22:00.788418 1052504 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:22:00.788445 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:00.791246 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.791539 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.791589 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.791744 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:00.791938 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.792104 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:00.792239 1052504 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa Username:docker}
	I0127 15:22:00.879677 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:22:00.905217 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 15:22:00.929822 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 15:22:00.954320 1052504 provision.go:87] duration metric: took 271.331953ms to configureAuth
	I0127 15:22:00.954355 1052504 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:22:00.954585 1052504 config.go:182] Loaded profile config "kubernetes-upgrade-878562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:22:00.954765 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:00.957349 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.957667 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:00.957697 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:00.957853 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:00.958063 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.958219 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:00.958328 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:00.958458 1052504 main.go:141] libmachine: Using SSH client type: native
	I0127 15:22:00.958638 1052504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 15:22:00.958651 1052504 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:22:01.189546 1052504 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
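The container-runtime options above land in a small sysconfig drop-in that CRI-O's unit file sources, followed by a service restart. The following is a local-equivalent sketch of those steps (the log performs them on the guest via sudo tee over SSH); the file path and option string follow the log, and error handling is simplified for illustration.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same content the provisioner writes on the guest.
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"

	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0644); err != nil {
		log.Fatal(err)
	}
	// Restart the service so the new options take effect.
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		log.Fatal(err)
	}
}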
	
	I0127 15:22:01.189576 1052504 main.go:141] libmachine: Checking connection to Docker...
	I0127 15:22:01.189588 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetURL
	I0127 15:22:01.191071 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | using libvirt version 6000000
	I0127 15:22:01.193722 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.194078 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:01.194104 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.194308 1052504 main.go:141] libmachine: Docker is up and running!
	I0127 15:22:01.194327 1052504 main.go:141] libmachine: Reticulating splines...
	I0127 15:22:01.194336 1052504 client.go:171] duration metric: took 27.062477735s to LocalClient.Create
	I0127 15:22:01.194364 1052504 start.go:167] duration metric: took 27.062555613s to libmachine.API.Create "kubernetes-upgrade-878562"
	I0127 15:22:01.194376 1052504 start.go:293] postStartSetup for "kubernetes-upgrade-878562" (driver="kvm2")
	I0127 15:22:01.194388 1052504 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:22:01.194405 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:22:01.194669 1052504 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:22:01.194694 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:01.197266 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.197578 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:01.197606 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.197812 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:01.198028 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:01.198218 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:01.198363 1052504 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa Username:docker}
	I0127 15:22:01.287536 1052504 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:22:01.292269 1052504 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:22:01.292301 1052504 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:22:01.292383 1052504 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:22:01.292463 1052504 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:22:01.292558 1052504 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:22:01.302744 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:22:01.331234 1052504 start.go:296] duration metric: took 136.839848ms for postStartSetup
	I0127 15:22:01.331320 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetConfigRaw
	I0127 15:22:01.331968 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetIP
	I0127 15:22:01.334923 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.335278 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:01.335301 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.335655 1052504 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/config.json ...
	I0127 15:22:01.335858 1052504 start.go:128] duration metric: took 27.225592173s to createHost
	I0127 15:22:01.335884 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:01.338482 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.338870 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:01.338902 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.339075 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:01.339281 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:01.339449 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:01.339607 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:01.339822 1052504 main.go:141] libmachine: Using SSH client type: native
	I0127 15:22:01.340004 1052504 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 15:22:01.340014 1052504 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:22:01.458101 1052504 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737991321.433742675
	
	I0127 15:22:01.458123 1052504 fix.go:216] guest clock: 1737991321.433742675
	I0127 15:22:01.458130 1052504 fix.go:229] Guest: 2025-01-27 15:22:01.433742675 +0000 UTC Remote: 2025-01-27 15:22:01.335871255 +0000 UTC m=+52.419216681 (delta=97.87142ms)
	I0127 15:22:01.458158 1052504 fix.go:200] guest clock delta is within tolerance: 97.87142ms
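fix.go compares the guest clock (read with date +%s.%N over SSH) against the host-side timestamp and only intervenes when the skew exceeds a tolerance; here the 97.87ms delta passes. A minimal sketch of that comparison using the two timestamps from the log lines above; the 2-second tolerance is an assumed illustration value, not necessarily the one minikube applies.

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute skew between the guest and host clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Values from the log: guest `date +%s.%N` vs. the host-side remote timestamp.
	guest := time.Unix(0, 1737991321433742675)
	host := time.Date(2025, 1, 27, 15, 22, 1, 335871255, time.UTC)
	d := clockDelta(guest, host)
	fmt.Printf("guest clock delta %v, within 2s tolerance: %v\n", d, d <= 2*time.Second)
}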
	I0127 15:22:01.458163 1052504 start.go:83] releasing machines lock for "kubernetes-upgrade-878562", held for 27.348072259s
	I0127 15:22:01.458185 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:22:01.458471 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetIP
	I0127 15:22:01.461409 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.461848 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:01.461885 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.462089 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:22:01.462729 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:22:01.463013 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:22:01.463154 1052504 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:22:01.463219 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:01.463302 1052504 ssh_runner.go:195] Run: cat /version.json
	I0127 15:22:01.463331 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:22:01.466423 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.466693 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.466796 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:01.466822 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.467074 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:01.467102 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:01.467122 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:01.467271 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:22:01.467359 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:01.467470 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:22:01.467496 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:01.467597 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:22:01.467681 1052504 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa Username:docker}
	I0127 15:22:01.467799 1052504 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa Username:docker}
	I0127 15:22:01.554786 1052504 ssh_runner.go:195] Run: systemctl --version
	I0127 15:22:01.577625 1052504 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:22:01.739359 1052504 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:22:01.746839 1052504 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:22:01.746935 1052504 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:22:01.771888 1052504 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:22:01.771918 1052504 start.go:495] detecting cgroup driver to use...
	I0127 15:22:01.772004 1052504 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:22:01.793163 1052504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:22:01.808401 1052504 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:22:01.808467 1052504 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:22:01.824099 1052504 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:22:01.843310 1052504 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:22:01.971390 1052504 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:22:02.111223 1052504 docker.go:233] disabling docker service ...
	I0127 15:22:02.111313 1052504 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:22:02.127071 1052504 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:22:02.141840 1052504 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:22:02.278210 1052504 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:22:02.416792 1052504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:22:02.432207 1052504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:22:02.451685 1052504 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 15:22:02.451762 1052504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:22:02.462650 1052504 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:22:02.462719 1052504 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:22:02.474047 1052504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:22:02.486311 1052504 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:22:02.498467 1052504 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:22:02.510593 1052504 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:22:02.523643 1052504 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:22:02.523721 1052504 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:22:02.539759 1052504 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
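The sysctl probe above fails because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, which is why the next step is a modprobe rather than a hard error, followed by enabling IPv4 forwarding. A short sketch of that fallback as the log shows it; the commands mirror the log (run via sudo) and error handling is simplified.

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes the bridge-nf sysctl, loads br_netfilter if the
// key is missing, then enables IPv4 forwarding, matching the sequence above.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// /proc/sys/net/bridge/* is absent until the module is loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}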
	I0127 15:22:02.551202 1052504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:22:02.681125 1052504 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:22:02.786326 1052504 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:22:02.786414 1052504 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:22:02.791283 1052504 start.go:563] Will wait 60s for crictl version
	I0127 15:22:02.791346 1052504 ssh_runner.go:195] Run: which crictl
	I0127 15:22:02.795634 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:22:02.851416 1052504 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:22:02.851519 1052504 ssh_runner.go:195] Run: crio --version
	I0127 15:22:02.881190 1052504 ssh_runner.go:195] Run: crio --version
	I0127 15:22:02.922172 1052504 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 15:22:02.923376 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetIP
	I0127 15:22:02.926639 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:02.927060 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:21:50 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:22:02.927093 1052504 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:22:02.927309 1052504 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 15:22:02.932610 1052504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:22:02.947222 1052504 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-878562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878562 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:22:02.947337 1052504 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:22:02.947388 1052504 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:22:02.986309 1052504 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:22:02.986380 1052504 ssh_runner.go:195] Run: which lz4
	I0127 15:22:02.990919 1052504 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:22:02.995405 1052504 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:22:02.995441 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 15:22:04.845664 1052504 crio.go:462] duration metric: took 1.854813665s to copy over tarball
	I0127 15:22:04.845765 1052504 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 15:22:07.642740 1052504 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.796934495s)
	I0127 15:22:07.642776 1052504 crio.go:469] duration metric: took 2.797075758s to extract the tarball
	I0127 15:22:07.642786 1052504 ssh_runner.go:146] rm: /preloaded.tar.lz4
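The preload path above is: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over when it does not, unpack it into /var preserving security xattrs, then delete the tarball. A condensed local sketch of the check-and-extract part; the copy step is only a placeholder here because the report performs it over SCP.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	if _, err := os.Stat(tarball); err != nil {
		// Placeholder for the SCP step in the log: copy the cached
		// preloaded-images tarball from the host into place first.
		log.Fatalf("%s not present; copy the cached tarball here first", tarball)
	}

	// Same extraction flags as the log: keep security xattrs, decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	_ = os.Remove(tarball) // free the space once the images are unpacked
}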
	I0127 15:22:07.686953 1052504 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:22:07.739917 1052504 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:22:07.739947 1052504 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 15:22:07.740035 1052504 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:22:07.740051 1052504 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:22:07.740069 1052504 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 15:22:07.740088 1052504 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 15:22:07.740060 1052504 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:22:07.740131 1052504 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:22:07.740165 1052504 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:22:07.740246 1052504 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:22:07.741584 1052504 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:22:07.741950 1052504 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:22:07.742070 1052504 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:22:07.742114 1052504 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:22:07.741585 1052504 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:22:07.742268 1052504 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 15:22:07.742673 1052504 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 15:22:07.742873 1052504 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:22:07.977968 1052504 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 15:22:07.988369 1052504 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 15:22:08.005784 1052504 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 15:22:08.013860 1052504 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:22:08.016948 1052504 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:22:08.024739 1052504 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:22:08.046424 1052504 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 15:22:08.046489 1052504 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 15:22:08.046539 1052504 ssh_runner.go:195] Run: which crictl
	I0127 15:22:08.084329 1052504 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:22:08.114939 1052504 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 15:22:08.114997 1052504 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:22:08.115052 1052504 ssh_runner.go:195] Run: which crictl
	I0127 15:22:08.174544 1052504 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 15:22:08.174595 1052504 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 15:22:08.174643 1052504 ssh_runner.go:195] Run: which crictl
	I0127 15:22:08.190268 1052504 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 15:22:08.190326 1052504 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:22:08.190352 1052504 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 15:22:08.190386 1052504 ssh_runner.go:195] Run: which crictl
	I0127 15:22:08.190393 1052504 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:22:08.190438 1052504 ssh_runner.go:195] Run: which crictl
	I0127 15:22:08.197855 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:22:08.197886 1052504 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 15:22:08.197902 1052504 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 15:22:08.197934 1052504 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:22:08.197960 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:22:08.197965 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:22:08.197972 1052504 ssh_runner.go:195] Run: which crictl
	I0127 15:22:08.197939 1052504 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:22:08.198004 1052504 ssh_runner.go:195] Run: which crictl
	I0127 15:22:08.203200 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:22:08.203211 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:22:08.218159 1052504 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:22:08.310051 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:22:08.310143 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:22:08.349889 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:22:08.349891 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:22:08.350008 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:22:08.349973 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:22:08.370838 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:22:08.491669 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:22:08.506752 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:22:08.537553 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:22:08.543435 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:22:08.543449 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:22:08.543546 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:22:08.546614 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:22:08.626684 1052504 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 15:22:08.670041 1052504 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 15:22:08.694357 1052504 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 15:22:08.694603 1052504 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 15:22:08.696109 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:22:08.698339 1052504 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:22:08.698352 1052504 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 15:22:08.738355 1052504 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 15:22:08.743003 1052504 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 15:22:08.743075 1052504 cache_images.go:92] duration metric: took 1.003108633s to LoadCachedImages
	W0127 15:22:08.743179 1052504 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
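The burst of podman image inspect / crictl rmi calls above is the image-cache check: each required image is looked up in the container runtime, anything not matching the expected digest is removed, and the loader then falls back to the per-image cache on disk, which in this run is empty (hence the warning; the images are simply pulled later during kubeadm init). A sketch of the existence check itself; the expected-hash comparison is omitted for brevity.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks podman for the stored image ID; a non-zero exit means the image
// is not present in the container runtime and has to be loaded or pulled.
func imageID(ref string) (string, bool) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/coredns:1.7.0",
	} {
		if id, ok := imageID(ref); ok {
			fmt.Println(ref, "present as", id)
		} else {
			fmt.Println(ref, "missing; needs transfer")
		}
	}
}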
	I0127 15:22:08.743200 1052504 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.20.0 crio true true} ...
	I0127 15:22:08.743379 1052504 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-878562 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
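The kubelet unit drop-in above is rendered from the node's settings (runtime endpoint, hostname override, node IP) before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in the log. A minimal text/template sketch of that kind of rendering; the template text here is simplified for illustration and is not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for the real drop-in template.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.20.0/kubelet",
		"CRISocket":   "unix:///var/run/crio/crio.sock",
		"Hostname":    "kubernetes-upgrade-878562",
		"NodeIP":      "192.168.50.193",
	})
}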
	I0127 15:22:08.743468 1052504 ssh_runner.go:195] Run: crio config
	I0127 15:22:08.800217 1052504 cni.go:84] Creating CNI manager for ""
	I0127 15:22:08.800247 1052504 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:22:08.800261 1052504 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:22:08.800290 1052504 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-878562 NodeName:kubernetes-upgrade-878562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 15:22:08.800478 1052504 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-878562"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:22:08.800566 1052504 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 15:22:08.811713 1052504 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:22:08.811792 1052504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:22:08.823472 1052504 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0127 15:22:08.841618 1052504 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:22:08.860682 1052504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0127 15:22:08.882952 1052504 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0127 15:22:08.887548 1052504 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
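The one-liner above replaces any existing control-plane.minikube.internal entry atomically: filter the old line out, append the new one, write to a temp file, then copy the result back over /etc/hosts with sudo. The same idea expressed directly in Go; this sketch prints the rewritten file instead of replacing it, to stay side-effect free.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<name>" and appends a
// fresh "ip\tname" entry, mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		data = []byte{} // keep the sketch usable even without an /etc/hosts
	}
	fmt.Print(upsertHostsEntry(string(data), "192.168.50.193", "control-plane.minikube.internal"))
}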
	I0127 15:22:08.902928 1052504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:22:09.020292 1052504 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:22:09.037687 1052504 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562 for IP: 192.168.50.193
	I0127 15:22:09.037715 1052504 certs.go:194] generating shared ca certs ...
	I0127 15:22:09.037733 1052504 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:22:09.037895 1052504 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:22:09.037941 1052504 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:22:09.037950 1052504 certs.go:256] generating profile certs ...
	I0127 15:22:09.038008 1052504 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/client.key
	I0127 15:22:09.038025 1052504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/client.crt with IP's: []
	I0127 15:22:09.164496 1052504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/client.crt ...
	I0127 15:22:09.164527 1052504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/client.crt: {Name:mkdda41b2e74b83168b635b4777c8ebbffa6fbc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:22:09.164748 1052504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/client.key ...
	I0127 15:22:09.164769 1052504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/client.key: {Name:mkbe22fd77669ae7c3f946c8542d0c11b81177c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:22:09.164889 1052504 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.key.b9edc8c3
	I0127 15:22:09.164910 1052504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.crt.b9edc8c3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.193]
	I0127 15:22:09.337790 1052504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.crt.b9edc8c3 ...
	I0127 15:22:09.337824 1052504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.crt.b9edc8c3: {Name:mkebf4c90fcdc9ea95fd3a5b495cf6cbba98ae3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:22:09.337998 1052504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.key.b9edc8c3 ...
	I0127 15:22:09.338012 1052504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.key.b9edc8c3: {Name:mk1e6fa69d906e0d199bf60dd5edd9c1fb8dba6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:22:09.338091 1052504 certs.go:381] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.crt.b9edc8c3 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.crt
	I0127 15:22:09.338186 1052504 certs.go:385] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.key.b9edc8c3 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.key
	I0127 15:22:09.338251 1052504 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.key
	I0127 15:22:09.338267 1052504 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.crt with IP's: []
	I0127 15:22:09.467542 1052504 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.crt ...
	I0127 15:22:09.467576 1052504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.crt: {Name:mk1ec26ed875a31e34540a80b9b11802b4adcf07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:22:09.467756 1052504 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.key ...
	I0127 15:22:09.467769 1052504 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.key: {Name:mkbbc360ad6cccf22660bc180f188baf7ac5e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
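certs.go above signs the apiserver certificate with the cluster CA and embeds the IP SANs it will be reached on (the service ClusterIP 10.96.0.1, localhost, and the node IP 192.168.50.193). A minimal sketch of issuing such a certificate with crypto/x509; it assumes an RSA CA key in PKCS#1 PEM on disk, and the file names, validity period, and subject are illustrative values rather than minikube's.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEMBlock(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block
}

func main() {
	// CA material written earlier in the run (paths are illustrative).
	caCert, err := x509.ParseCertificate(mustPEMBlock("ca.crt").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("ca.key").Bytes)
	if err != nil {
		log.Fatal(err)
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same IP SANs as the apiserver cert generated in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.193"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("apiserver.key",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}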
	I0127 15:22:09.467953 1052504 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:22:09.468003 1052504 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:22:09.468015 1052504 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:22:09.468037 1052504 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:22:09.468061 1052504 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:22:09.468084 1052504 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:22:09.468125 1052504 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:22:09.468726 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:22:09.496002 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:22:09.522489 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:22:09.547371 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:22:09.571265 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 15:22:09.596669 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:22:09.623569 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:22:09.653340 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 15:22:09.681517 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:22:09.709870 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:22:09.737493 1052504 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:22:09.764343 1052504 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:22:09.784593 1052504 ssh_runner.go:195] Run: openssl version
	I0127 15:22:09.791175 1052504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:22:09.803161 1052504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:22:09.808084 1052504 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:22:09.808156 1052504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:22:09.814198 1052504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 15:22:09.825793 1052504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:22:09.840088 1052504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:22:09.846057 1052504 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:22:09.846129 1052504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:22:09.852644 1052504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:22:09.863766 1052504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:22:09.878293 1052504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:22:09.886373 1052504 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:22:09.886446 1052504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:22:09.892310 1052504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:22:09.907752 1052504 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:22:09.913561 1052504 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 15:22:09.913633 1052504 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-878562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-878562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:22:09.913759 1052504 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:22:09.913852 1052504 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:22:09.976101 1052504 cri.go:89] found id: ""
	I0127 15:22:09.976190 1052504 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:22:09.989808 1052504 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:22:10.000796 1052504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:22:10.011410 1052504 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:22:10.011441 1052504 kubeadm.go:157] found existing configuration files:
	
	I0127 15:22:10.011499 1052504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:22:10.022388 1052504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:22:10.022463 1052504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:22:10.033960 1052504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:22:10.045050 1052504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:22:10.045131 1052504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:22:10.056312 1052504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:22:10.066137 1052504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:22:10.066216 1052504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:22:10.076154 1052504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:22:10.086138 1052504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:22:10.086210 1052504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:22:10.100271 1052504 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:22:10.416306 1052504 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:24:08.746869 1052504 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:24:08.747074 1052504 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:24:08.748076 1052504 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:24:08.748177 1052504 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:24:08.748338 1052504 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:24:08.748568 1052504 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:24:08.748780 1052504 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:24:08.748946 1052504 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:24:08.750780 1052504 out.go:235]   - Generating certificates and keys ...
	I0127 15:24:08.750892 1052504 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:24:08.750985 1052504 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:24:08.751090 1052504 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 15:24:08.751176 1052504 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 15:24:08.751282 1052504 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 15:24:08.751361 1052504 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 15:24:08.751490 1052504 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 15:24:08.751718 1052504 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-878562 localhost] and IPs [192.168.50.193 127.0.0.1 ::1]
	I0127 15:24:08.751802 1052504 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 15:24:08.751952 1052504 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-878562 localhost] and IPs [192.168.50.193 127.0.0.1 ::1]
	I0127 15:24:08.752056 1052504 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 15:24:08.752174 1052504 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 15:24:08.752260 1052504 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 15:24:08.752347 1052504 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:24:08.752426 1052504 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:24:08.752520 1052504 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:24:08.752628 1052504 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:24:08.752689 1052504 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:24:08.752782 1052504 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:24:08.752895 1052504 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:24:08.752961 1052504 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:24:08.753088 1052504 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:24:08.755283 1052504 out.go:235]   - Booting up control plane ...
	I0127 15:24:08.755399 1052504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:24:08.755521 1052504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:24:08.755636 1052504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:24:08.755765 1052504 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:24:08.755965 1052504 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:24:08.756031 1052504 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:24:08.756129 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:24:08.756377 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:24:08.756489 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:24:08.756699 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:24:08.756805 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:24:08.757057 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:24:08.757159 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:24:08.757408 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:24:08.757499 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:24:08.757680 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:24:08.757690 1052504 kubeadm.go:310] 
	I0127 15:24:08.757744 1052504 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:24:08.757812 1052504 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:24:08.757829 1052504 kubeadm.go:310] 
	I0127 15:24:08.757874 1052504 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:24:08.757929 1052504 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:24:08.758114 1052504 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:24:08.758129 1052504 kubeadm.go:310] 
	I0127 15:24:08.758247 1052504 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:24:08.758300 1052504 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:24:08.758341 1052504 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:24:08.758353 1052504 kubeadm.go:310] 
	I0127 15:24:08.758440 1052504 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:24:08.758509 1052504 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:24:08.758515 1052504 kubeadm.go:310] 
	I0127 15:24:08.758648 1052504 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:24:08.758737 1052504 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:24:08.758798 1052504 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:24:08.758870 1052504 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:24:08.758933 1052504 kubeadm.go:310] 
	W0127 15:24:08.759032 1052504 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-878562 localhost] and IPs [192.168.50.193 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-878562 localhost] and IPs [192.168.50.193 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-878562 localhost] and IPs [192.168.50.193 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-878562 localhost] and IPs [192.168.50.193 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 15:24:08.759085 1052504 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:24:10.223736 1052504 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.464620526s)
	I0127 15:24:10.223830 1052504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:24:10.240058 1052504 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:24:10.250736 1052504 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:24:10.250760 1052504 kubeadm.go:157] found existing configuration files:
	
	I0127 15:24:10.250807 1052504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:24:10.260961 1052504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:24:10.261049 1052504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:24:10.272073 1052504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:24:10.282194 1052504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:24:10.282262 1052504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:24:10.292908 1052504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:24:10.303093 1052504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:24:10.303170 1052504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:24:10.312972 1052504 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:24:10.323538 1052504 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:24:10.323613 1052504 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:24:10.334169 1052504 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:24:10.570931 1052504 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:26:06.667714 1052504 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:26:06.667839 1052504 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:26:06.671220 1052504 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:26:06.671298 1052504 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:26:06.671389 1052504 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:26:06.671532 1052504 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:26:06.671634 1052504 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:26:06.671722 1052504 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:26:06.748575 1052504 out.go:235]   - Generating certificates and keys ...
	I0127 15:26:06.748711 1052504 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:26:06.748795 1052504 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:26:06.748900 1052504 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:26:06.748976 1052504 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:26:06.749072 1052504 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:26:06.749164 1052504 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:26:06.749308 1052504 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:26:06.749410 1052504 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:26:06.749530 1052504 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:26:06.749649 1052504 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:26:06.749684 1052504 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:26:06.749759 1052504 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:26:06.749827 1052504 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:26:06.749908 1052504 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:26:06.750025 1052504 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:26:06.750100 1052504 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:26:06.750299 1052504 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:26:06.750440 1052504 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:26:06.750511 1052504 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:26:06.750655 1052504 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:26:06.816113 1052504 out.go:235]   - Booting up control plane ...
	I0127 15:26:06.816279 1052504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:26:06.816377 1052504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:26:06.816467 1052504 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:26:06.816582 1052504 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:26:06.816772 1052504 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:26:06.816834 1052504 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:26:06.816918 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:26:06.817218 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:26:06.817334 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:26:06.817607 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:26:06.817708 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:26:06.817920 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:26:06.818025 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:26:06.818304 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:26:06.818422 1052504 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:26:06.818688 1052504 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:26:06.818707 1052504 kubeadm.go:310] 
	I0127 15:26:06.818774 1052504 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:26:06.818842 1052504 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:26:06.818859 1052504 kubeadm.go:310] 
	I0127 15:26:06.818918 1052504 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:26:06.818967 1052504 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:26:06.819108 1052504 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:26:06.819123 1052504 kubeadm.go:310] 
	I0127 15:26:06.819280 1052504 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:26:06.819328 1052504 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:26:06.819379 1052504 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:26:06.819388 1052504 kubeadm.go:310] 
	I0127 15:26:06.819551 1052504 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:26:06.819670 1052504 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:26:06.819682 1052504 kubeadm.go:310] 
	I0127 15:26:06.819844 1052504 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:26:06.820055 1052504 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:26:06.820154 1052504 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:26:06.820267 1052504 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:26:06.820310 1052504 kubeadm.go:310] 
	I0127 15:26:06.820351 1052504 kubeadm.go:394] duration metric: took 3m56.906725298s to StartCluster
	I0127 15:26:06.820407 1052504 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:26:06.820477 1052504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:26:06.881549 1052504 cri.go:89] found id: ""
	I0127 15:26:06.881584 1052504 logs.go:282] 0 containers: []
	W0127 15:26:06.881599 1052504 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:26:06.881608 1052504 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:26:06.881679 1052504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:26:06.919720 1052504 cri.go:89] found id: ""
	I0127 15:26:06.919755 1052504 logs.go:282] 0 containers: []
	W0127 15:26:06.919778 1052504 logs.go:284] No container was found matching "etcd"
	I0127 15:26:06.919787 1052504 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:26:06.919871 1052504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:26:06.958559 1052504 cri.go:89] found id: ""
	I0127 15:26:06.958604 1052504 logs.go:282] 0 containers: []
	W0127 15:26:06.958620 1052504 logs.go:284] No container was found matching "coredns"
	I0127 15:26:06.958630 1052504 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:26:06.958721 1052504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:26:07.006980 1052504 cri.go:89] found id: ""
	I0127 15:26:07.007017 1052504 logs.go:282] 0 containers: []
	W0127 15:26:07.007029 1052504 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:26:07.007038 1052504 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:26:07.007106 1052504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:26:07.046686 1052504 cri.go:89] found id: ""
	I0127 15:26:07.046725 1052504 logs.go:282] 0 containers: []
	W0127 15:26:07.046737 1052504 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:26:07.046746 1052504 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:26:07.046814 1052504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:26:07.097329 1052504 cri.go:89] found id: ""
	I0127 15:26:07.097362 1052504 logs.go:282] 0 containers: []
	W0127 15:26:07.097374 1052504 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:26:07.097382 1052504 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:26:07.097447 1052504 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:26:07.147384 1052504 cri.go:89] found id: ""
	I0127 15:26:07.147410 1052504 logs.go:282] 0 containers: []
	W0127 15:26:07.147418 1052504 logs.go:284] No container was found matching "kindnet"
	I0127 15:26:07.147430 1052504 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:26:07.147444 1052504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:26:07.287895 1052504 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:26:07.287924 1052504 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:26:07.287938 1052504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:26:07.399777 1052504 logs.go:123] Gathering logs for container status ...
	I0127 15:26:07.399826 1052504 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:26:07.446551 1052504 logs.go:123] Gathering logs for kubelet ...
	I0127 15:26:07.446589 1052504 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:26:07.500870 1052504 logs.go:123] Gathering logs for dmesg ...
	I0127 15:26:07.500903 1052504 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0127 15:26:07.517956 1052504 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 15:26:07.518047 1052504 out.go:270] * 
	* 
	W0127 15:26:07.518119 1052504 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:26:07.518138 1052504 out.go:270] * 
	W0127 15:26:07.519224 1052504 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 15:26:07.522725 1052504 out.go:201] 
	W0127 15:26:07.523983 1052504 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:26:07.524043 1052504 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 15:26:07.524069 1052504 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 15:26:07.525650 1052504 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
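For a failure like this, a minimal manual follow-up (a sketch only, assuming the same profile name, driver and CRI-O socket as in this run) is to inspect the kubelet inside the guest and then retry the start with the cgroup-driver override that the suggestion in the log above recommends; the ssh invocations mirror the ones recorded in the Audit table further down, and the crictl pipeline is quoted so the pipe runs inside the guest rather than on the host:

	# inspect the kubelet and the control-plane containers inside the guest
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-878562 sudo systemctl status kubelet --all --full --no-pager
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-878562 sudo journalctl -xeu kubelet --all --full --no-pager
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-878562 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the same start command with the suggested kubelet cgroup driver
	out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd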
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-878562
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-878562: (1.648239132s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-878562 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-878562 status --format={{.Host}}: exit status 7 (77.99908ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.528588144s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-878562 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (99.433683ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-878562] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-878562
	    minikube start -p kubernetes-upgrade-878562 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8785622 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-878562 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
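For reference, a minimal recovery path following the suggestion above (a sketch; only relevant if a v1.20.0 cluster were genuinely wanted, whereas the test below simply restarts at v1.32.1) is to confirm the running version and then recreate the profile at the older version; the driver and runtime flags are added here to mirror this run's configuration, the suggestion itself omits them:

	kubectl --context kubernetes-upgrade-878562 version --output=json
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-878562
	out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio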
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-878562 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.109534504s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-27 15:28:06.129487143 +0000 UTC m=+4957.802281315
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-878562 -n kubernetes-upgrade-878562
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-878562 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-878562 logs -n 25: (2.039083249s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-539934         | NoKubernetes-539934       | jenkins | v1.35.0 | 27 Jan 25 15:26 UTC | 27 Jan 25 15:27 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878562   | kubernetes-upgrade-878562 | jenkins | v1.35.0 | 27 Jan 25 15:27 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-878562   | kubernetes-upgrade-878562 | jenkins | v1.35.0 | 27 Jan 25 15:27 UTC | 27 Jan 25 15:28 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-539934         | NoKubernetes-539934       | jenkins | v1.35.0 | 27 Jan 25 15:27 UTC | 27 Jan 25 15:27 UTC |
	| start   | -p NoKubernetes-539934         | NoKubernetes-539934       | jenkins | v1.35.0 | 27 Jan 25 15:27 UTC | 27 Jan 25 15:27 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 pgrep -a        | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:27 UTC | 27 Jan 25 15:27 UTC |
	|         | kubelet                        |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-539934 sudo    | NoKubernetes-539934       | jenkins | v1.35.0 | 27 Jan 25 15:27 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo cat        | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | /etc/nsswitch.conf             |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo cat        | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | /etc/hosts                     |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo cat        | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | /etc/resolv.conf               |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo crictl     | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | pods                           |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo crictl ps  | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | --all                          |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo find       | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | /etc/cni -type f -exec sh -c   |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;           |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo ip a s     | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	| ssh     | -p auto-230388 sudo ip r s     | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	| ssh     | -p auto-230388 sudo            | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | iptables-save                  |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo iptables   | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | -t nat -L -n -v                |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo systemctl  | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | status kubelet --all --full    |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo systemctl  | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | cat kubelet --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo journalctl | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | -xeu kubelet --all --full      |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-539934         | NoKubernetes-539934       | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC |                     |
	| ssh     | -p auto-230388 sudo cat        | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | /etc/kubernetes/kubelet.conf   |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo cat        | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC | 27 Jan 25 15:28 UTC |
	|         | /var/lib/kubelet/config.yaml   |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo systemctl  | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC |                     |
	|         | status docker --all --full     |                           |         |         |                     |                     |
	|         | --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p auto-230388 sudo systemctl  | auto-230388               | jenkins | v1.35.0 | 27 Jan 25 15:28 UTC |                     |
	|         | cat docker --no-pager          |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 15:27:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 15:27:21.047327 1059947 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:27:21.047441 1059947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:27:21.047444 1059947 out.go:358] Setting ErrFile to fd 2...
	I0127 15:27:21.047448 1059947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:27:21.047673 1059947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:27:21.048311 1059947 out.go:352] Setting JSON to false
	I0127 15:27:21.049364 1059947 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22188,"bootTime":1737969453,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:27:21.049423 1059947 start.go:139] virtualization: kvm guest
	I0127 15:27:21.051562 1059947 out.go:177] * [NoKubernetes-539934] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:27:21.053138 1059947 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:27:21.053141 1059947 notify.go:220] Checking for updates...
	I0127 15:27:21.054626 1059947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:27:21.055937 1059947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:27:21.057280 1059947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:27:21.059785 1059947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:27:21.061768 1059947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:27:21.065923 1059947 config.go:182] Loaded profile config "auto-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:27:21.066020 1059947 config.go:182] Loaded profile config "cert-expiration-445777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:27:21.066097 1059947 config.go:182] Loaded profile config "kubernetes-upgrade-878562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:27:21.066125 1059947 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0127 15:27:21.066199 1059947 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:27:21.106073 1059947 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 15:27:21.107398 1059947 start.go:297] selected driver: kvm2
	I0127 15:27:21.107406 1059947 start.go:901] validating driver "kvm2" against <nil>
	I0127 15:27:21.107417 1059947 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:27:21.107689 1059947 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0127 15:27:21.107763 1059947 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:27:21.107856 1059947 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:27:21.124846 1059947 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:27:21.124886 1059947 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 15:27:21.125441 1059947 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 15:27:21.125584 1059947 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 15:27:21.125608 1059947 cni.go:84] Creating CNI manager for ""
	I0127 15:27:21.125662 1059947 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:27:21.125669 1059947 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 15:27:21.125679 1059947 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0127 15:27:21.125727 1059947 start.go:340] cluster config:
	{Name:NoKubernetes-539934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-539934 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:27:21.125858 1059947 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:27:21.127554 1059947 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-539934
	I0127 15:27:21.128822 1059947 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0127 15:27:21.162160 1059947 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0127 15:27:21.162354 1059947 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/NoKubernetes-539934/config.json ...
	I0127 15:27:21.162395 1059947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/NoKubernetes-539934/config.json: {Name:mkad727f9c99ca3ad23b0df0a4b3cefd66f78cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:27:21.162545 1059947 start.go:360] acquireMachinesLock for NoKubernetes-539934: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:27:24.654738 1059947 start.go:364] duration metric: took 3.492155036s to acquireMachinesLock for "NoKubernetes-539934"
	I0127 15:27:24.654785 1059947 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-539934 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-539
934 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:27:24.654932 1059947 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 15:27:24.656568 1059947 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0127 15:27:24.656789 1059947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:27:24.656842 1059947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:27:24.677807 1059947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0127 15:27:24.678319 1059947 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:27:24.678923 1059947 main.go:141] libmachine: Using API Version  1
	I0127 15:27:24.678942 1059947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:27:24.679317 1059947 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:27:24.679561 1059947 main.go:141] libmachine: (NoKubernetes-539934) Calling .GetMachineName
	I0127 15:27:24.679729 1059947 main.go:141] libmachine: (NoKubernetes-539934) Calling .DriverName
	I0127 15:27:24.679881 1059947 start.go:159] libmachine.API.Create for "NoKubernetes-539934" (driver="kvm2")
	I0127 15:27:24.679923 1059947 client.go:168] LocalClient.Create starting
	I0127 15:27:24.679956 1059947 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem
	I0127 15:27:24.679996 1059947 main.go:141] libmachine: Decoding PEM data...
	I0127 15:27:24.680014 1059947 main.go:141] libmachine: Parsing certificate...
	I0127 15:27:24.680106 1059947 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem
	I0127 15:27:24.680126 1059947 main.go:141] libmachine: Decoding PEM data...
	I0127 15:27:24.680142 1059947 main.go:141] libmachine: Parsing certificate...
	I0127 15:27:24.680160 1059947 main.go:141] libmachine: Running pre-create checks...
	I0127 15:27:24.680173 1059947 main.go:141] libmachine: (NoKubernetes-539934) Calling .PreCreateCheck
	I0127 15:27:24.680617 1059947 main.go:141] libmachine: (NoKubernetes-539934) Calling .GetConfigRaw
	I0127 15:27:24.681173 1059947 main.go:141] libmachine: Creating machine...
	I0127 15:27:24.681183 1059947 main.go:141] libmachine: (NoKubernetes-539934) Calling .Create
	I0127 15:27:24.681363 1059947 main.go:141] libmachine: (NoKubernetes-539934) creating KVM machine...
	I0127 15:27:24.681372 1059947 main.go:141] libmachine: (NoKubernetes-539934) creating network...
	I0127 15:27:24.682587 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | found existing default KVM network
	I0127 15:27:24.684759 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:24.684570 1059969 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014e80}
	I0127 15:27:24.684802 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | created network xml: 
	I0127 15:27:24.684811 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | <network>
	I0127 15:27:24.684820 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |   <name>mk-NoKubernetes-539934</name>
	I0127 15:27:24.684827 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |   <dns enable='no'/>
	I0127 15:27:24.684834 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |   
	I0127 15:27:24.684841 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 15:27:24.684848 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |     <dhcp>
	I0127 15:27:24.684855 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 15:27:24.684864 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |     </dhcp>
	I0127 15:27:24.684868 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |   </ip>
	I0127 15:27:24.684875 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG |   
	I0127 15:27:24.684886 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | </network>
	I0127 15:27:24.684895 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | 
	I0127 15:27:24.690075 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | trying to create private KVM network mk-NoKubernetes-539934 192.168.39.0/24...
	I0127 15:27:24.778161 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | private KVM network mk-NoKubernetes-539934 192.168.39.0/24 created
	I0127 15:27:24.778195 1059947 main.go:141] libmachine: (NoKubernetes-539934) setting up store path in /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/NoKubernetes-539934 ...
	I0127 15:27:24.778219 1059947 main.go:141] libmachine: (NoKubernetes-539934) building disk image from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 15:27:24.778236 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:24.778203 1059969 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:27:24.778466 1059947 main.go:141] libmachine: (NoKubernetes-539934) Downloading /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 15:27:25.096051 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:25.095876 1059969 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/NoKubernetes-539934/id_rsa...
	I0127 15:27:25.178774 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:25.178648 1059969 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/NoKubernetes-539934/NoKubernetes-539934.rawdisk...
	I0127 15:27:25.178794 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | Writing magic tar header
	I0127 15:27:25.178806 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | Writing SSH key tar header
	I0127 15:27:25.178812 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:25.178781 1059969 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/NoKubernetes-539934 ...
	I0127 15:27:25.178886 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/NoKubernetes-539934
	I0127 15:27:25.178977 1059947 main.go:141] libmachine: (NoKubernetes-539934) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/NoKubernetes-539934 (perms=drwx------)
	I0127 15:27:25.179013 1059947 main.go:141] libmachine: (NoKubernetes-539934) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines (perms=drwxr-xr-x)
	I0127 15:27:25.179021 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines
	I0127 15:27:25.179033 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:27:25.179041 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652
	I0127 15:27:25.179050 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 15:27:25.179057 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | checking permissions on dir: /home/jenkins
	I0127 15:27:25.179068 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | checking permissions on dir: /home
	I0127 15:27:25.179074 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | skipping /home - not owner
	I0127 15:27:25.179083 1059947 main.go:141] libmachine: (NoKubernetes-539934) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube (perms=drwxr-xr-x)
	I0127 15:27:25.179110 1059947 main.go:141] libmachine: (NoKubernetes-539934) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652 (perms=drwxrwxr-x)
	I0127 15:27:25.179119 1059947 main.go:141] libmachine: (NoKubernetes-539934) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 15:27:25.179125 1059947 main.go:141] libmachine: (NoKubernetes-539934) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 15:27:25.179134 1059947 main.go:141] libmachine: (NoKubernetes-539934) creating domain...
	I0127 15:27:25.180390 1059947 main.go:141] libmachine: (NoKubernetes-539934) define libvirt domain using xml: 
	I0127 15:27:25.180406 1059947 main.go:141] libmachine: (NoKubernetes-539934) <domain type='kvm'>
	I0127 15:27:25.180415 1059947 main.go:141] libmachine: (NoKubernetes-539934)   <name>NoKubernetes-539934</name>
	I0127 15:27:25.180426 1059947 main.go:141] libmachine: (NoKubernetes-539934)   <memory unit='MiB'>6000</memory>
	I0127 15:27:25.180437 1059947 main.go:141] libmachine: (NoKubernetes-539934)   <vcpu>2</vcpu>
	I0127 15:27:25.180443 1059947 main.go:141] libmachine: (NoKubernetes-539934)   <features>
	I0127 15:27:25.180449 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <acpi/>
	I0127 15:27:25.180455 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <apic/>
	I0127 15:27:25.180462 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <pae/>
	I0127 15:27:25.180467 1059947 main.go:141] libmachine: (NoKubernetes-539934)     
	I0127 15:27:25.180473 1059947 main.go:141] libmachine: (NoKubernetes-539934)   </features>
	I0127 15:27:25.180479 1059947 main.go:141] libmachine: (NoKubernetes-539934)   <cpu mode='host-passthrough'>
	I0127 15:27:25.180484 1059947 main.go:141] libmachine: (NoKubernetes-539934)   
	I0127 15:27:25.180490 1059947 main.go:141] libmachine: (NoKubernetes-539934)   </cpu>
	I0127 15:27:25.180496 1059947 main.go:141] libmachine: (NoKubernetes-539934)   <os>
	I0127 15:27:25.180502 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <type>hvm</type>
	I0127 15:27:25.180510 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <boot dev='cdrom'/>
	I0127 15:27:25.180515 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <boot dev='hd'/>
	I0127 15:27:25.180522 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <bootmenu enable='no'/>
	I0127 15:27:25.180527 1059947 main.go:141] libmachine: (NoKubernetes-539934)   </os>
	I0127 15:27:25.180534 1059947 main.go:141] libmachine: (NoKubernetes-539934)   <devices>
	I0127 15:27:25.180540 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <disk type='file' device='cdrom'>
	I0127 15:27:25.180552 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/NoKubernetes-539934/boot2docker.iso'/>
	I0127 15:27:25.180559 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <target dev='hdc' bus='scsi'/>
	I0127 15:27:25.180565 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <readonly/>
	I0127 15:27:25.180571 1059947 main.go:141] libmachine: (NoKubernetes-539934)     </disk>
	I0127 15:27:25.180598 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <disk type='file' device='disk'>
	I0127 15:27:25.180615 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 15:27:25.180632 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/NoKubernetes-539934/NoKubernetes-539934.rawdisk'/>
	I0127 15:27:25.180660 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <target dev='hda' bus='virtio'/>
	I0127 15:27:25.180668 1059947 main.go:141] libmachine: (NoKubernetes-539934)     </disk>
	I0127 15:27:25.180682 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <interface type='network'>
	I0127 15:27:25.180693 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <source network='mk-NoKubernetes-539934'/>
	I0127 15:27:25.180699 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <model type='virtio'/>
	I0127 15:27:25.180705 1059947 main.go:141] libmachine: (NoKubernetes-539934)     </interface>
	I0127 15:27:25.180738 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <interface type='network'>
	I0127 15:27:25.180746 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <source network='default'/>
	I0127 15:27:25.180756 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <model type='virtio'/>
	I0127 15:27:25.180763 1059947 main.go:141] libmachine: (NoKubernetes-539934)     </interface>
	I0127 15:27:25.180769 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <serial type='pty'>
	I0127 15:27:25.180776 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <target port='0'/>
	I0127 15:27:25.180781 1059947 main.go:141] libmachine: (NoKubernetes-539934)     </serial>
	I0127 15:27:25.180787 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <console type='pty'>
	I0127 15:27:25.180792 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <target type='serial' port='0'/>
	I0127 15:27:25.180799 1059947 main.go:141] libmachine: (NoKubernetes-539934)     </console>
	I0127 15:27:25.180803 1059947 main.go:141] libmachine: (NoKubernetes-539934)     <rng model='virtio'>
	I0127 15:27:25.180811 1059947 main.go:141] libmachine: (NoKubernetes-539934)       <backend model='random'>/dev/random</backend>
	I0127 15:27:25.180816 1059947 main.go:141] libmachine: (NoKubernetes-539934)     </rng>
	I0127 15:27:25.180823 1059947 main.go:141] libmachine: (NoKubernetes-539934)     
	I0127 15:27:25.180827 1059947 main.go:141] libmachine: (NoKubernetes-539934)     
	I0127 15:27:25.180865 1059947 main.go:141] libmachine: (NoKubernetes-539934)   </devices>
	I0127 15:27:25.180882 1059947 main.go:141] libmachine: (NoKubernetes-539934) </domain>
	I0127 15:27:25.180900 1059947 main.go:141] libmachine: (NoKubernetes-539934) 
	I0127 15:27:25.185306 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:cb:f1:d5 in network default
	I0127 15:27:25.185989 1059947 main.go:141] libmachine: (NoKubernetes-539934) starting domain...
	I0127 15:27:25.186006 1059947 main.go:141] libmachine: (NoKubernetes-539934) ensuring networks are active...
	I0127 15:27:25.186017 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:25.186824 1059947 main.go:141] libmachine: (NoKubernetes-539934) Ensuring network default is active
	I0127 15:27:25.187126 1059947 main.go:141] libmachine: (NoKubernetes-539934) Ensuring network mk-NoKubernetes-539934 is active
	I0127 15:27:25.187615 1059947 main.go:141] libmachine: (NoKubernetes-539934) getting domain XML...
	I0127 15:27:25.188331 1059947 main.go:141] libmachine: (NoKubernetes-539934) creating domain...
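The block above shows the kvm2 driver rendering a libvirt domain XML for NoKubernetes-539934 (CD-ROM boot ISO, raw virtio disk, two virtio NICs, serial console, virtio RNG) and then defining and starting that domain. A minimal sketch of the same define-then-start flow using the libvirt Go bindings, assuming the XML has already been rendered into a string (import path and error handling are illustrative, not the actual driver code):

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt" // assumed import path; older code uses github.com/libvirt/libvirt-go
    )

    // defineAndStart registers a domain from its XML definition and boots it,
    // mirroring the "creating domain..." / "starting domain..." steps in the log above.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI the kvm2 driver uses (KVMQemuURI)
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // make the domain known to libvirtd
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // start (boot) the defined domain
    }

    func main() {
        if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
            log.Fatal(err)
        }
    }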
	I0127 15:27:26.694795 1059050 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:27:26.694880 1059050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:27:26.694996 1059050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:27:26.695149 1059050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:27:26.695285 1059050 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:27:26.695370 1059050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:27:26.697057 1059050 out.go:235]   - Generating certificates and keys ...
	I0127 15:27:26.697151 1059050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:27:26.697228 1059050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:27:26.697347 1059050 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 15:27:26.697398 1059050 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 15:27:26.697458 1059050 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 15:27:26.697549 1059050 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 15:27:26.697631 1059050 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 15:27:26.697796 1059050 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-230388 localhost] and IPs [192.168.72.110 127.0.0.1 ::1]
	I0127 15:27:26.697890 1059050 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 15:27:26.698053 1059050 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-230388 localhost] and IPs [192.168.72.110 127.0.0.1 ::1]
	I0127 15:27:26.698179 1059050 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 15:27:26.698282 1059050 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 15:27:26.698356 1059050 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 15:27:26.698460 1059050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:27:26.698529 1059050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:27:26.698630 1059050 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:27:26.698707 1059050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:27:26.698816 1059050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:27:26.698906 1059050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:27:26.699018 1059050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:27:26.699125 1059050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:27:26.700537 1059050 out.go:235]   - Booting up control plane ...
	I0127 15:27:26.700651 1059050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:27:26.700758 1059050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:27:26.700852 1059050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:27:26.700988 1059050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:27:26.701151 1059050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:27:26.701230 1059050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:27:26.701407 1059050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:27:26.701539 1059050 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:27:26.701622 1059050 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002356926s
	I0127 15:27:26.701717 1059050 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:27:26.701803 1059050 kubeadm.go:310] [api-check] The API server is healthy after 5.002103549s
	I0127 15:27:26.701963 1059050 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:27:26.702146 1059050 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:27:26.702236 1059050 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:27:26.702419 1059050 kubeadm.go:310] [mark-control-plane] Marking the node auto-230388 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:27:26.702495 1059050 kubeadm.go:310] [bootstrap-token] Using token: k3us4p.wdsj01t978sghahn
	I0127 15:27:26.703995 1059050 out.go:235]   - Configuring RBAC rules ...
	I0127 15:27:26.704118 1059050 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:27:26.704303 1059050 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:27:26.704510 1059050 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:27:26.704677 1059050 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:27:26.704816 1059050 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:27:26.704929 1059050 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:27:26.705120 1059050 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:27:26.705202 1059050 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:27:26.705275 1059050 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:27:26.705285 1059050 kubeadm.go:310] 
	I0127 15:27:26.705365 1059050 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:27:26.705381 1059050 kubeadm.go:310] 
	I0127 15:27:26.705501 1059050 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:27:26.705511 1059050 kubeadm.go:310] 
	I0127 15:27:26.705559 1059050 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:27:26.705649 1059050 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:27:26.705728 1059050 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:27:26.705744 1059050 kubeadm.go:310] 
	I0127 15:27:26.705824 1059050 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:27:26.705840 1059050 kubeadm.go:310] 
	I0127 15:27:26.705912 1059050 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:27:26.705929 1059050 kubeadm.go:310] 
	I0127 15:27:26.706015 1059050 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:27:26.706119 1059050 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:27:26.706211 1059050 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:27:26.706226 1059050 kubeadm.go:310] 
	I0127 15:27:26.706374 1059050 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:27:26.706489 1059050 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:27:26.706501 1059050 kubeadm.go:310] 
	I0127 15:27:26.706603 1059050 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k3us4p.wdsj01t978sghahn \
	I0127 15:27:26.706769 1059050 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:27:26.706806 1059050 kubeadm.go:310] 	--control-plane 
	I0127 15:27:26.706814 1059050 kubeadm.go:310] 
	I0127 15:27:26.706929 1059050 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:27:26.706938 1059050 kubeadm.go:310] 
	I0127 15:27:26.707055 1059050 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k3us4p.wdsj01t978sghahn \
	I0127 15:27:26.707194 1059050 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:27:26.707215 1059050 cni.go:84] Creating CNI manager for ""
	I0127 15:27:26.707227 1059050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:27:26.708759 1059050 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:27:24.403349 1059700 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:27:24.403379 1059700 machine.go:96] duration metric: took 6.780118563s to provisionDockerMachine
	I0127 15:27:24.403393 1059700 start.go:293] postStartSetup for "kubernetes-upgrade-878562" (driver="kvm2")
	I0127 15:27:24.403407 1059700 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:27:24.403431 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:27:24.403837 1059700 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:27:24.403877 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:27:24.407413 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.407804 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:26:37 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:27:24.407840 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.408034 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:27:24.408287 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:27:24.408468 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:27:24.408639 1059700 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa Username:docker}
	I0127 15:27:24.492598 1059700 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:27:24.497412 1059700 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:27:24.497447 1059700 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:27:24.497520 1059700 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:27:24.497628 1059700 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:27:24.497749 1059700 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:27:24.508254 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:27:24.536472 1059700 start.go:296] duration metric: took 133.060954ms for postStartSetup
	I0127 15:27:24.536528 1059700 fix.go:56] duration metric: took 6.94184689s for fixHost
	I0127 15:27:24.536572 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:27:24.539863 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.540277 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:26:37 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:27:24.540312 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.540604 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:27:24.540835 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:27:24.541045 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:27:24.541214 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:27:24.541441 1059700 main.go:141] libmachine: Using SSH client type: native
	I0127 15:27:24.541697 1059700 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0127 15:27:24.541717 1059700 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:27:24.654570 1059700 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737991644.612086575
	
	I0127 15:27:24.654596 1059700 fix.go:216] guest clock: 1737991644.612086575
	I0127 15:27:24.654603 1059700 fix.go:229] Guest: 2025-01-27 15:27:24.612086575 +0000 UTC Remote: 2025-01-27 15:27:24.536533987 +0000 UTC m=+20.511979466 (delta=75.552588ms)
	I0127 15:27:24.654629 1059700 fix.go:200] guest clock delta is within tolerance: 75.552588ms
	I0127 15:27:24.654636 1059700 start.go:83] releasing machines lock for "kubernetes-upgrade-878562", held for 7.05999085s
	I0127 15:27:24.654669 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:27:24.655033 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetIP
	I0127 15:27:24.658547 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.659029 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:26:37 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:27:24.659062 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.659272 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:27:24.659950 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:27:24.660143 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .DriverName
	I0127 15:27:24.660266 1059700 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:27:24.660314 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:27:24.660429 1059700 ssh_runner.go:195] Run: cat /version.json
	I0127 15:27:24.660462 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHHostname
	I0127 15:27:24.663514 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.663625 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.664041 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:26:37 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:27:24.664087 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.664117 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:26:37 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:27:24.664150 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:24.664402 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:27:24.664507 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHPort
	I0127 15:27:24.664682 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:27:24.664708 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHKeyPath
	I0127 15:27:24.664823 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:27:24.664913 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetSSHUsername
	I0127 15:27:24.665026 1059700 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa Username:docker}
	I0127 15:27:24.665096 1059700 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/kubernetes-upgrade-878562/id_rsa Username:docker}
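The two sshutil entries above build SSH clients to the same guest (one for the registry.k8s.io probe, one for `cat /version.json`), authenticating as user "docker" on port 22 with the per-machine id_rsa key. A rough equivalent with golang.org/x/crypto/ssh, assuming a placeholder key path and with host-key checking relaxed purely for illustration:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient dials the guest the way the sshutil log entries describe:
    // key-based auth as user "docker" on port 22.
    func newSSHClient(ip, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
    }

    func main() {
        client, err := newSSHClient("192.168.50.193", "/path/to/machines/kubernetes-upgrade-878562/id_rsa")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, _ := sess.Output("cat /version.json") // one of the commands run in the log above
        fmt.Println(string(out))
    }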
	I0127 15:27:24.752315 1059700 ssh_runner.go:195] Run: systemctl --version
	I0127 15:27:24.792495 1059700 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:27:24.957710 1059700 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:27:24.965815 1059700 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:27:24.965895 1059700 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:27:24.979194 1059700 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 15:27:24.979223 1059700 start.go:495] detecting cgroup driver to use...
	I0127 15:27:24.979310 1059700 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:27:25.008867 1059700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:27:25.026287 1059700 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:27:25.026364 1059700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:27:25.045317 1059700 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:27:25.064827 1059700 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:27:25.240355 1059700 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:27:25.430449 1059700 docker.go:233] disabling docker service ...
	I0127 15:27:25.430534 1059700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:27:25.455116 1059700 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:27:25.476242 1059700 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:27:25.630874 1059700 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:27:25.816286 1059700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:27:25.831458 1059700 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:27:25.860351 1059700 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 15:27:25.860418 1059700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:27:25.875895 1059700 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:27:25.875968 1059700 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:27:25.893469 1059700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:27:25.906219 1059700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:27:25.919299 1059700 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:27:25.931659 1059700 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:27:25.943731 1059700 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:27:25.958108 1059700 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:27:25.970947 1059700 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:27:25.982016 1059700 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 15:27:25.993049 1059700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:27:26.153220 1059700 ssh_runner.go:195] Run: sudo systemctl restart crio
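The run of ssh_runner commands above rewrites the guest's container-runtime config before restarting crio: it points crictl at the CRI-O socket, pins the pause image to registry.k8s.io/pause:3.10, switches the cgroup manager to cgroupfs with conmon in the "pod" cgroup, and opens unprivileged low ports via default_sysctls. Each `sed -i 's|^.*KEY = .*$|KEY = "VALUE"|'` edit replaces a whole line in /etc/crio/crio.conf.d/02-crio.conf; a small sketch of that replace-the-line pattern in Go (starting content is illustrative):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setKey mirrors the sed edits in the log above: rewrite the whole line that
    // assigns a key in the crio drop-in config, whatever its current value is.
    func setKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
        // Illustrative starting content for /etc/crio/crio.conf.d/02-crio.conf.
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }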
	I0127 15:27:26.709950 1059050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:27:26.722211 1059050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
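The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration recommended for the kvm2 + crio combination. Its exact contents are not printed in the log; a representative bridge conflist for the 10.244.0.0/16 pod CIDR used by this cluster might look roughly like the constant below (all field values are illustrative, not the actual payload):

    package main

    import (
        "log"
        "os"
    )

    // A representative bridge CNI conflist like the one minikube writes to the guest above.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }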
	I0127 15:27:26.748300 1059050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:27:26.748432 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:26.748491 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-230388 minikube.k8s.io/updated_at=2025_01_27T15_27_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=auto-230388 minikube.k8s.io/primary=true
	I0127 15:27:26.910049 1059050 ops.go:34] apiserver oom_adj: -16
	I0127 15:27:26.910215 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:27.410241 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:27.910328 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:28.411226 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:28.910862 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:29.410564 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:29.910393 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:30.410377 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:30.911013 1059050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:27:30.995121 1059050 kubeadm.go:1113] duration metric: took 4.246774075s to wait for elevateKubeSystemPrivileges
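The repeated `kubectl get sa default` runs above (roughly one every 500ms between 15:27:26.910 and 15:27:30.995) are how minikube waits for the default service account to appear before declaring elevateKubeSystemPrivileges finished. A bare-bones sketch of that poll loop, with the command simplified to plain `kubectl` and an assumed overall timeout:

    package main

    import (
        "context"
        "log"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the context
    // expires, matching the ~500ms cadence visible in the log above.
    func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) // assumed timeout
        defer cancel()
        if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
            log.Fatal(err)
        }
    }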
	I0127 15:27:30.995184 1059050 kubeadm.go:394] duration metric: took 15.388506321s to StartCluster
	I0127 15:27:30.995221 1059050 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:27:30.995337 1059050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:27:30.997648 1059050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:27:30.997934 1059050 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.110 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:27:30.998382 1059050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 15:27:30.998706 1059050 config.go:182] Loaded profile config "auto-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:27:30.998754 1059050 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:27:30.998933 1059050 addons.go:69] Setting storage-provisioner=true in profile "auto-230388"
	I0127 15:27:30.998953 1059050 addons.go:238] Setting addon storage-provisioner=true in "auto-230388"
	I0127 15:27:30.998988 1059050 host.go:66] Checking if "auto-230388" exists ...
	I0127 15:27:30.999125 1059050 addons.go:69] Setting default-storageclass=true in profile "auto-230388"
	I0127 15:27:30.999149 1059050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-230388"
	I0127 15:27:31.000124 1059050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:27:31.000175 1059050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:27:31.000251 1059050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:27:31.000314 1059050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:27:31.000685 1059050 out.go:177] * Verifying Kubernetes components...
	I0127 15:27:26.585701 1059947 main.go:141] libmachine: (NoKubernetes-539934) waiting for IP...
	I0127 15:27:26.586392 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:26.586832 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:26.586895 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:26.586817 1059969 retry.go:31] will retry after 249.648285ms: waiting for domain to come up
	I0127 15:27:26.838754 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:26.839398 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:26.839423 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:26.839347 1059969 retry.go:31] will retry after 340.739355ms: waiting for domain to come up
	I0127 15:27:27.181896 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:27.182418 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:27.182438 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:27.182397 1059969 retry.go:31] will retry after 437.615065ms: waiting for domain to come up
	I0127 15:27:27.621995 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:27.622457 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:27.622502 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:27.622422 1059969 retry.go:31] will retry after 418.82203ms: waiting for domain to come up
	I0127 15:27:28.043145 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:28.043793 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:28.043810 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:28.043770 1059969 retry.go:31] will retry after 502.435264ms: waiting for domain to come up
	I0127 15:27:28.547400 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:28.547867 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:28.547883 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:28.547842 1059969 retry.go:31] will retry after 615.808035ms: waiting for domain to come up
	I0127 15:27:29.165471 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:29.165949 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:29.165968 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:29.165887 1059969 retry.go:31] will retry after 920.006225ms: waiting for domain to come up
	I0127 15:27:30.087357 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:30.087857 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:30.087907 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:30.087820 1059969 retry.go:31] will retry after 1.419535336s: waiting for domain to come up
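While the auto-230388 cluster bootstraps, the kvm2 driver above is still polling DHCP for the NoKubernetes-539934 domain's IP, with retry.go backing off a little longer on each attempt (≈250ms, 340ms, 437ms, ... up to ~1.4s). A compact sketch of that kind of growing, jittered retry loop; the growth factor, jitter, and cap here are assumptions, not the library's exact behavior:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil keeps calling fn with a jittered, growing delay until it succeeds
    // or maxWait elapses, similar in spirit to the retry.go waits in the log above.
    func retryUntil(fn func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if err := fn(); err == nil {
                return nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
            fmt.Printf("will retry after %v\n", sleep)
            time.Sleep(sleep)
            if delay < 2*time.Second { // cap the growth (assumed cap)
                delay = delay * 3 / 2
            }
        }
        return errors.New("timed out waiting for condition")
    }

    func main() {
        attempts := 0
        _ = retryUntil(func() error {
            attempts++
            if attempts < 5 {
                return errors.New("domain has no IP yet")
            }
            return nil
        }, time.Minute)
    }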
	I0127 15:27:31.002682 1059050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:27:31.022330 1059050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I0127 15:27:31.022416 1059050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46377
	I0127 15:27:31.022989 1059050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:27:31.023053 1059050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:27:31.023564 1059050 main.go:141] libmachine: Using API Version  1
	I0127 15:27:31.023583 1059050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:27:31.023719 1059050 main.go:141] libmachine: Using API Version  1
	I0127 15:27:31.023746 1059050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:27:31.024020 1059050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:27:31.024129 1059050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:27:31.024213 1059050 main.go:141] libmachine: (auto-230388) Calling .GetState
	I0127 15:27:31.024728 1059050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:27:31.024786 1059050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:27:31.028224 1059050 addons.go:238] Setting addon default-storageclass=true in "auto-230388"
	I0127 15:27:31.028274 1059050 host.go:66] Checking if "auto-230388" exists ...
	I0127 15:27:31.028664 1059050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:27:31.028709 1059050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:27:31.044615 1059050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0127 15:27:31.045179 1059050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:27:31.045685 1059050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0127 15:27:31.045787 1059050 main.go:141] libmachine: Using API Version  1
	I0127 15:27:31.045808 1059050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:27:31.046141 1059050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:27:31.046281 1059050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:27:31.046695 1059050 main.go:141] libmachine: Using API Version  1
	I0127 15:27:31.046724 1059050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:27:31.046902 1059050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:27:31.046947 1059050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:27:31.047145 1059050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:27:31.047410 1059050 main.go:141] libmachine: (auto-230388) Calling .GetState
	I0127 15:27:31.049805 1059050 main.go:141] libmachine: (auto-230388) Calling .DriverName
	I0127 15:27:31.051577 1059050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:27:31.052975 1059050 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:27:31.052992 1059050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:27:31.053053 1059050 main.go:141] libmachine: (auto-230388) Calling .GetSSHHostname
	I0127 15:27:31.057495 1059050 main.go:141] libmachine: (auto-230388) DBG | domain auto-230388 has defined MAC address 52:54:00:3c:ea:88 in network mk-auto-230388
	I0127 15:27:31.057912 1059050 main.go:141] libmachine: (auto-230388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:ea:88", ip: ""} in network mk-auto-230388: {Iface:virbr1 ExpiryTime:2025-01-27 16:27:01 +0000 UTC Type:0 Mac:52:54:00:3c:ea:88 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:auto-230388 Clientid:01:52:54:00:3c:ea:88}
	I0127 15:27:31.057943 1059050 main.go:141] libmachine: (auto-230388) DBG | domain auto-230388 has defined IP address 192.168.72.110 and MAC address 52:54:00:3c:ea:88 in network mk-auto-230388
	I0127 15:27:31.058336 1059050 main.go:141] libmachine: (auto-230388) Calling .GetSSHPort
	I0127 15:27:31.058535 1059050 main.go:141] libmachine: (auto-230388) Calling .GetSSHKeyPath
	I0127 15:27:31.058680 1059050 main.go:141] libmachine: (auto-230388) Calling .GetSSHUsername
	I0127 15:27:31.058816 1059050 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/auto-230388/id_rsa Username:docker}
	I0127 15:27:31.068638 1059050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I0127 15:27:31.069144 1059050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:27:31.069730 1059050 main.go:141] libmachine: Using API Version  1
	I0127 15:27:31.069751 1059050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:27:31.070098 1059050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:27:31.070307 1059050 main.go:141] libmachine: (auto-230388) Calling .GetState
	I0127 15:27:31.072274 1059050 main.go:141] libmachine: (auto-230388) Calling .DriverName
	I0127 15:27:31.072548 1059050 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:27:31.072569 1059050 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:27:31.072593 1059050 main.go:141] libmachine: (auto-230388) Calling .GetSSHHostname
	I0127 15:27:31.076097 1059050 main.go:141] libmachine: (auto-230388) DBG | domain auto-230388 has defined MAC address 52:54:00:3c:ea:88 in network mk-auto-230388
	I0127 15:27:31.076609 1059050 main.go:141] libmachine: (auto-230388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:ea:88", ip: ""} in network mk-auto-230388: {Iface:virbr1 ExpiryTime:2025-01-27 16:27:01 +0000 UTC Type:0 Mac:52:54:00:3c:ea:88 Iaid: IPaddr:192.168.72.110 Prefix:24 Hostname:auto-230388 Clientid:01:52:54:00:3c:ea:88}
	I0127 15:27:31.076633 1059050 main.go:141] libmachine: (auto-230388) DBG | domain auto-230388 has defined IP address 192.168.72.110 and MAC address 52:54:00:3c:ea:88 in network mk-auto-230388
	I0127 15:27:31.076789 1059050 main.go:141] libmachine: (auto-230388) Calling .GetSSHPort
	I0127 15:27:31.076930 1059050 main.go:141] libmachine: (auto-230388) Calling .GetSSHKeyPath
	I0127 15:27:31.077055 1059050 main.go:141] libmachine: (auto-230388) Calling .GetSSHUsername
	I0127 15:27:31.077268 1059050 sshutil.go:53] new ssh client: &{IP:192.168.72.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/auto-230388/id_rsa Username:docker}
	I0127 15:27:31.225023 1059050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 15:27:31.260855 1059050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:27:31.511224 1059050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:27:31.521679 1059050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:27:31.889803 1059050 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
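The long bash pipeline at 15:27:31.225 above edits the coredns ConfigMap in place before re-applying it: a hosts block that resolves host.minikube.internal to the gateway IP (192.168.72.1) is inserted just before the forward plugin, and `log` is inserted before `errors`. The resulting Corefile region, shown here as a Go constant for reference; the inserted hosts/log lines come straight from the sed expressions in that command, while the surrounding directives are only the usual stock CoreDNS ones and may differ in detail:

    package main

    import "fmt"

    // Corefile fragment after minikube's in-place edit of the coredns ConfigMap above.
    const corefileSnippet = `.:53 {
        log
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        hosts {
            192.168.72.1 host.minikube.internal
            fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    `

    func main() { fmt.Print(corefileSnippet) }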
	I0127 15:27:31.891310 1059050 node_ready.go:35] waiting up to 15m0s for node "auto-230388" to be "Ready" ...
	I0127 15:27:31.910381 1059050 node_ready.go:49] node "auto-230388" has status "Ready":"True"
	I0127 15:27:31.910416 1059050 node_ready.go:38] duration metric: took 19.073948ms for node "auto-230388" to be "Ready" ...
	I0127 15:27:31.910431 1059050 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:27:31.945171 1059050 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-dtgnk" in "kube-system" namespace to be "Ready" ...
	I0127 15:27:32.048162 1059050 main.go:141] libmachine: Making call to close driver server
	I0127 15:27:32.048199 1059050 main.go:141] libmachine: (auto-230388) Calling .Close
	I0127 15:27:32.048532 1059050 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:27:32.048553 1059050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:27:32.048561 1059050 main.go:141] libmachine: Making call to close driver server
	I0127 15:27:32.048568 1059050 main.go:141] libmachine: (auto-230388) Calling .Close
	I0127 15:27:32.048845 1059050 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:27:32.048871 1059050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:27:32.057649 1059050 main.go:141] libmachine: Making call to close driver server
	I0127 15:27:32.057676 1059050 main.go:141] libmachine: (auto-230388) Calling .Close
	I0127 15:27:32.058037 1059050 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:27:32.058061 1059050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:27:32.058058 1059050 main.go:141] libmachine: (auto-230388) DBG | Closing plugin on server side
	I0127 15:27:32.395287 1059050 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-230388" context rescaled to 1 replicas
	I0127 15:27:32.607228 1059050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.085500438s)
	I0127 15:27:32.607304 1059050 main.go:141] libmachine: Making call to close driver server
	I0127 15:27:32.607320 1059050 main.go:141] libmachine: (auto-230388) Calling .Close
	I0127 15:27:32.607649 1059050 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:27:32.607667 1059050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:27:32.607677 1059050 main.go:141] libmachine: Making call to close driver server
	I0127 15:27:32.607692 1059050 main.go:141] libmachine: (auto-230388) Calling .Close
	I0127 15:27:32.607952 1059050 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:27:32.608011 1059050 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:27:32.609667 1059050 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0127 15:27:33.162984 1059700 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.009714622s)
	I0127 15:27:33.163041 1059700 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:27:33.163108 1059700 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:27:33.169157 1059700 start.go:563] Will wait 60s for crictl version
	I0127 15:27:33.169235 1059700 ssh_runner.go:195] Run: which crictl
	I0127 15:27:33.174891 1059700 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:27:33.212502 1059700 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:27:33.212615 1059700 ssh_runner.go:195] Run: crio --version
	I0127 15:27:33.249314 1059700 ssh_runner.go:195] Run: crio --version
	I0127 15:27:33.283885 1059700 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 15:27:33.285408 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) Calling .GetIP
	I0127 15:27:33.289174 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:33.289680 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:1e:51", ip: ""} in network mk-kubernetes-upgrade-878562: {Iface:virbr2 ExpiryTime:2025-01-27 16:26:37 +0000 UTC Type:0 Mac:52:54:00:32:1e:51 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:kubernetes-upgrade-878562 Clientid:01:52:54:00:32:1e:51}
	I0127 15:27:33.289712 1059700 main.go:141] libmachine: (kubernetes-upgrade-878562) DBG | domain kubernetes-upgrade-878562 has defined IP address 192.168.50.193 and MAC address 52:54:00:32:1e:51 in network mk-kubernetes-upgrade-878562
	I0127 15:27:33.290108 1059700 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 15:27:33.295268 1059700 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-878562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-878562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:27:33.295420 1059700 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:27:33.295495 1059700 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:27:33.352608 1059700 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 15:27:33.352632 1059700 crio.go:433] Images already preloaded, skipping extraction
	I0127 15:27:33.352683 1059700 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:27:33.395830 1059700 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 15:27:33.395858 1059700 cache_images.go:84] Images are preloaded, skipping loading
	I0127 15:27:33.395869 1059700 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.32.1 crio true true} ...
	I0127 15:27:33.396002 1059700 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-878562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-878562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:27:33.396087 1059700 ssh_runner.go:195] Run: crio config
	I0127 15:27:33.452360 1059700 cni.go:84] Creating CNI manager for ""
	I0127 15:27:33.452391 1059700 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:27:33.452405 1059700 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:27:33.452439 1059700 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-878562 NodeName:kubernetes-upgrade-878562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 15:27:33.452622 1059700 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-878562"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:27:33.452706 1059700 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 15:27:33.467463 1059700 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:27:33.467555 1059700 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:27:33.478632 1059700 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0127 15:27:33.500282 1059700 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:27:33.521906 1059700 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0127 15:27:33.543423 1059700 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0127 15:27:33.548349 1059700 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:27:33.696270 1059700 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:27:33.711959 1059700 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562 for IP: 192.168.50.193
	I0127 15:27:33.711987 1059700 certs.go:194] generating shared ca certs ...
	I0127 15:27:33.712011 1059700 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:27:33.712237 1059700 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:27:33.712293 1059700 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:27:33.712308 1059700 certs.go:256] generating profile certs ...
	I0127 15:27:33.712420 1059700 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/client.key
	I0127 15:27:33.712486 1059700 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.key.b9edc8c3
	I0127 15:27:33.712536 1059700 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.key
	I0127 15:27:33.712684 1059700 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:27:33.712719 1059700 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:27:33.712733 1059700 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:27:33.712771 1059700 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:27:33.712806 1059700 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:27:33.712838 1059700 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:27:33.712891 1059700 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:27:33.713740 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:27:33.746235 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:27:33.775326 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:27:33.801688 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:27:33.827706 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 15:27:33.853175 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:27:33.917408 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:27:33.962986 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kubernetes-upgrade-878562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 15:27:34.014406 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:27:31.509420 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:31.509789 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:31.509813 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:31.509730 1059969 retry.go:31] will retry after 1.435928122s: waiting for domain to come up
	I0127 15:27:32.948296 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:32.949333 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:32.949382 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:32.949312 1059969 retry.go:31] will retry after 2.31395358s: waiting for domain to come up
	I0127 15:27:35.265285 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | domain NoKubernetes-539934 has defined MAC address 52:54:00:22:2f:c5 in network mk-NoKubernetes-539934
	I0127 15:27:35.265851 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | unable to find current IP address of domain NoKubernetes-539934 in network mk-NoKubernetes-539934
	I0127 15:27:35.265876 1059947 main.go:141] libmachine: (NoKubernetes-539934) DBG | I0127 15:27:35.265822 1059969 retry.go:31] will retry after 2.893549018s: waiting for domain to come up
	I0127 15:27:32.611179 1059050 addons.go:514] duration metric: took 1.61241917s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0127 15:27:33.953252 1059050 pod_ready.go:103] pod "coredns-668d6bf9bc-dtgnk" in "kube-system" namespace has status "Ready":"False"
	I0127 15:27:35.954056 1059050 pod_ready.go:103] pod "coredns-668d6bf9bc-dtgnk" in "kube-system" namespace has status "Ready":"False"
	I0127 15:27:34.189334 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:27:34.338401 1059700 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:27:34.542790 1059700 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:27:34.686172 1059700 ssh_runner.go:195] Run: openssl version
	I0127 15:27:34.746762 1059700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:27:34.839256 1059700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:27:34.886441 1059700 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:27:34.886543 1059700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:27:34.945236 1059700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:27:34.983233 1059700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:27:35.032354 1059700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:27:35.108703 1059700 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:27:35.108796 1059700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:27:35.142632 1059700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:27:35.246467 1059700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:27:35.324523 1059700 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:27:35.345383 1059700 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:27:35.345477 1059700 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:27:35.367144 1059700 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 15:27:35.413031 1059700 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:27:35.449794 1059700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:27:35.499141 1059700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:27:35.525073 1059700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:27:35.549263 1059700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:27:35.571179 1059700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:27:35.579376 1059700 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 15:27:35.588965 1059700 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-878562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-878562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:27:35.589134 1059700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:27:35.589225 1059700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:27:35.656331 1059700 cri.go:89] found id: "72f3cce82d783141775b9278680864a4b04e68c936f81e00bdbd5fd01c7b9330"
	I0127 15:27:35.656410 1059700 cri.go:89] found id: "48d357d68097e224b6a17bad25df580729a3f5edf71f44d3a78c8a1cec8c1b74"
	I0127 15:27:35.656420 1059700 cri.go:89] found id: "0c4d235d46b5d7b225a7ec15565864e6ebfab804609ff90f39b0bf9e9e87e996"
	I0127 15:27:35.656426 1059700 cri.go:89] found id: "897e13c735816502202648667d20a2145ead918c18188c63785b8d06d395a971"
	I0127 15:27:35.656430 1059700 cri.go:89] found id: "0796781308fa8e496fa710a2f4c2434fb66edacb1f81549e30d8f0d95b7bad32"
	I0127 15:27:35.656435 1059700 cri.go:89] found id: "dd281c2e374cb994c6792d1001b5cc5f26a4d8fc460a8fdb98bc681368e4d3ec"
	I0127 15:27:35.656439 1059700 cri.go:89] found id: "9495c3391b42457c2809a5d02cf14de894a87a6c84b8a91166cf24300b12e6ba"
	I0127 15:27:35.656448 1059700 cri.go:89] found id: "38993bf445235b60c577bcecd17feb7bbc98d143d3fee4619c5ad84e5d828e72"
	I0127 15:27:35.656453 1059700 cri.go:89] found id: "ac88071509315b6fd771168d4e446b2ffd5628ac576634f296a525cc8eefec4c"
	I0127 15:27:35.656460 1059700 cri.go:89] found id: "da4f7f007ea2aa50432657efc4ae95b50c451c9d39ab4b759ef4d8db1aff0cb1"
	I0127 15:27:35.656469 1059700 cri.go:89] found id: "2132d0de61620a676f129d13e74fe970ccc8bc25b1638886362512e3907d9532"
	I0127 15:27:35.656473 1059700 cri.go:89] found id: "81b34aa2bc8bd29662cf271b8b486bafe62c61e5438e752ff8c3372fa8daad20"
	I0127 15:27:35.656478 1059700 cri.go:89] found id: "c5f0b5b3399d197591c398ff32c29f43773c27415b9002f81ec1e440a1984459"
	I0127 15:27:35.656482 1059700 cri.go:89] found id: "0f2945c769f360256396072dc9263b39f5459007904fd235076d55a0ec3f78d5"
	I0127 15:27:35.656489 1059700 cri.go:89] found id: ""
	I0127 15:27:35.656546 1059700 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-878562 -n kubernetes-upgrade-878562
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-878562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-878562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-878562
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-878562: (1.205304063s)
--- FAIL: TestKubernetesUpgrade (421.18s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (61.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-243834 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-243834 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.938156621s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-243834] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-243834" primary control-plane node in "pause-243834" cluster
	* Updating the running kvm2 "pause-243834" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-243834" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:23:50.580010 1056546 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:23:50.580215 1056546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:23:50.580226 1056546 out.go:358] Setting ErrFile to fd 2...
	I0127 15:23:50.580232 1056546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:23:50.580604 1056546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:23:50.581560 1056546 out.go:352] Setting JSON to false
	I0127 15:23:50.583196 1056546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21978,"bootTime":1737969453,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:23:50.583298 1056546 start.go:139] virtualization: kvm guest
	I0127 15:23:50.732558 1056546 out.go:177] * [pause-243834] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:23:50.847898 1056546 notify.go:220] Checking for updates...
	I0127 15:23:50.859178 1056546 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:23:50.860499 1056546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:23:50.861579 1056546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:23:50.862864 1056546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:23:50.875147 1056546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:23:51.004360 1056546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:23:51.038922 1056546 config.go:182] Loaded profile config "pause-243834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:23:51.039638 1056546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:23:51.039710 1056546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:23:51.063481 1056546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0127 15:23:51.064075 1056546 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:23:51.064772 1056546 main.go:141] libmachine: Using API Version  1
	I0127 15:23:51.064808 1056546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:23:51.065375 1056546 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:23:51.065627 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:51.065988 1056546 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:23:51.066445 1056546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:23:51.066493 1056546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:23:51.084906 1056546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0127 15:23:51.085490 1056546 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:23:51.086111 1056546 main.go:141] libmachine: Using API Version  1
	I0127 15:23:51.086143 1056546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:23:51.086585 1056546 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:23:51.086924 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:51.131803 1056546 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:23:51.133327 1056546 start.go:297] selected driver: kvm2
	I0127 15:23:51.133359 1056546 start.go:901] validating driver "kvm2" against &{Name:pause-243834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-243834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:23:51.133549 1056546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:23:51.134013 1056546 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:23:51.134106 1056546 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:23:51.158412 1056546 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:23:51.159330 1056546 cni.go:84] Creating CNI manager for ""
	I0127 15:23:51.159406 1056546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:23:51.159487 1056546 start.go:340] cluster config:
	{Name:pause-243834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-243834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:23:51.159628 1056546 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:23:51.161235 1056546 out.go:177] * Starting "pause-243834" primary control-plane node in "pause-243834" cluster
	I0127 15:23:51.162189 1056546 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:23:51.162234 1056546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 15:23:51.162248 1056546 cache.go:56] Caching tarball of preloaded images
	I0127 15:23:51.162366 1056546 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:23:51.162380 1056546 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 15:23:51.162547 1056546 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834/config.json ...
	I0127 15:23:51.162779 1056546 start.go:360] acquireMachinesLock for pause-243834: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:23:51.162828 1056546 start.go:364] duration metric: took 27.038µs to acquireMachinesLock for "pause-243834"
	I0127 15:23:51.162852 1056546 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:23:51.162880 1056546 fix.go:54] fixHost starting: 
	I0127 15:23:51.163235 1056546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:23:51.163279 1056546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:23:51.189922 1056546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0127 15:23:51.190788 1056546 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:23:51.191642 1056546 main.go:141] libmachine: Using API Version  1
	I0127 15:23:51.191678 1056546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:23:51.192126 1056546 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:23:51.192465 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:51.192684 1056546 main.go:141] libmachine: (pause-243834) Calling .GetState
	I0127 15:23:51.195327 1056546 fix.go:112] recreateIfNeeded on pause-243834: state=Running err=<nil>
	W0127 15:23:51.195351 1056546 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:23:51.197226 1056546 out.go:177] * Updating the running kvm2 "pause-243834" VM ...
	I0127 15:23:51.198634 1056546 machine.go:93] provisionDockerMachine start ...
	I0127 15:23:51.198661 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:51.198951 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:51.202037 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.202687 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:51.202710 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.203045 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:51.203246 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:51.203456 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:51.203612 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:51.203801 1056546 main.go:141] libmachine: Using SSH client type: native
	I0127 15:23:51.204070 1056546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I0127 15:23:51.204084 1056546 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:23:51.323453 1056546 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-243834
	
	I0127 15:23:51.323491 1056546 main.go:141] libmachine: (pause-243834) Calling .GetMachineName
	I0127 15:23:51.323793 1056546 buildroot.go:166] provisioning hostname "pause-243834"
	I0127 15:23:51.323824 1056546 main.go:141] libmachine: (pause-243834) Calling .GetMachineName
	I0127 15:23:51.323965 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:51.327164 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.327529 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:51.327653 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.327854 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:51.328053 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:51.328294 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:51.328417 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:51.328629 1056546 main.go:141] libmachine: Using SSH client type: native
	I0127 15:23:51.329055 1056546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I0127 15:23:51.329083 1056546 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-243834 && echo "pause-243834" | sudo tee /etc/hostname
	I0127 15:23:51.490272 1056546 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-243834
	
	I0127 15:23:51.490310 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:51.493748 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.494208 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:51.494235 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.494571 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:51.494766 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:51.494936 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:51.495118 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:51.495323 1056546 main.go:141] libmachine: Using SSH client type: native
	I0127 15:23:51.495573 1056546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I0127 15:23:51.495598 1056546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-243834' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-243834/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-243834' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:23:51.626893 1056546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:23:51.626927 1056546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:23:51.626950 1056546 buildroot.go:174] setting up certificates
	I0127 15:23:51.626964 1056546 provision.go:84] configureAuth start
	I0127 15:23:51.626976 1056546 main.go:141] libmachine: (pause-243834) Calling .GetMachineName
	I0127 15:23:51.627306 1056546 main.go:141] libmachine: (pause-243834) Calling .GetIP
	I0127 15:23:51.630402 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.630820 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:51.630851 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.630974 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:51.633468 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.633850 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:51.633908 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.634119 1056546 provision.go:143] copyHostCerts
	I0127 15:23:51.634202 1056546 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:23:51.634227 1056546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:23:51.634302 1056546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:23:51.634423 1056546 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:23:51.634434 1056546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:23:51.634462 1056546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:23:51.634545 1056546 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:23:51.634555 1056546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:23:51.634578 1056546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:23:51.634654 1056546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.pause-243834 san=[127.0.0.1 192.168.72.18 localhost minikube pause-243834]
	I0127 15:23:51.796508 1056546 provision.go:177] copyRemoteCerts
	I0127 15:23:51.796589 1056546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:23:51.796625 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:51.800221 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.800702 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:51.800728 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:51.801035 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:51.801315 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:51.801521 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:51.801717 1056546 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/pause-243834/id_rsa Username:docker}
	I0127 15:23:51.900247 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 15:23:51.936183 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:23:51.971925 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 15:23:52.006302 1056546 provision.go:87] duration metric: took 379.321346ms to configureAuth
	I0127 15:23:52.006342 1056546 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:23:52.006633 1056546 config.go:182] Loaded profile config "pause-243834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:23:52.006731 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:52.009921 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:52.010347 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:52.010379 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:52.010585 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:52.010823 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:52.011010 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:52.011164 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:52.011343 1056546 main.go:141] libmachine: Using SSH client type: native
	I0127 15:23:52.011580 1056546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I0127 15:23:52.011601 1056546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:23:57.622401 1056546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:23:57.622438 1056546 machine.go:96] duration metric: took 6.423788173s to provisionDockerMachine
	I0127 15:23:57.622453 1056546 start.go:293] postStartSetup for "pause-243834" (driver="kvm2")
	I0127 15:23:57.622486 1056546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:23:57.622517 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:57.622938 1056546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:23:57.622977 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:57.626298 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.626712 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:57.626751 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.627003 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:57.627205 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:57.627348 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:57.627491 1056546 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/pause-243834/id_rsa Username:docker}
	I0127 15:23:57.716958 1056546 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:23:57.721939 1056546 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:23:57.721975 1056546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:23:57.722095 1056546 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:23:57.722227 1056546 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:23:57.722369 1056546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:23:57.733042 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:23:57.761770 1056546 start.go:296] duration metric: took 139.297375ms for postStartSetup
	I0127 15:23:57.761826 1056546 fix.go:56] duration metric: took 6.59896354s for fixHost
	I0127 15:23:57.761854 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:57.764849 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.765328 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:57.765360 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.765567 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:57.765772 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:57.765942 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:57.766102 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:57.766287 1056546 main.go:141] libmachine: Using SSH client type: native
	I0127 15:23:57.766502 1056546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.18 22 <nil> <nil>}
	I0127 15:23:57.766513 1056546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:23:57.890009 1056546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737991437.883221746
	
	I0127 15:23:57.890029 1056546 fix.go:216] guest clock: 1737991437.883221746
	I0127 15:23:57.890039 1056546 fix.go:229] Guest: 2025-01-27 15:23:57.883221746 +0000 UTC Remote: 2025-01-27 15:23:57.761832076 +0000 UTC m=+7.247753855 (delta=121.38967ms)
	I0127 15:23:57.890076 1056546 fix.go:200] guest clock delta is within tolerance: 121.38967ms
	I0127 15:23:57.890085 1056546 start.go:83] releasing machines lock for "pause-243834", held for 6.727243847s
	I0127 15:23:57.890108 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:57.890356 1056546 main.go:141] libmachine: (pause-243834) Calling .GetIP
	I0127 15:23:57.893038 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.893409 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:57.893453 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.893601 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:57.894180 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:57.894380 1056546 main.go:141] libmachine: (pause-243834) Calling .DriverName
	I0127 15:23:57.894474 1056546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:23:57.894512 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:57.894640 1056546 ssh_runner.go:195] Run: cat /version.json
	I0127 15:23:57.894668 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHHostname
	I0127 15:23:57.897627 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.897708 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.898067 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:57.898116 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:23:57.898135 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.898147 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:23:57.898337 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:57.898390 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHPort
	I0127 15:23:57.898536 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:57.898547 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHKeyPath
	I0127 15:23:57.898702 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:57.898729 1056546 main.go:141] libmachine: (pause-243834) Calling .GetSSHUsername
	I0127 15:23:57.898787 1056546 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/pause-243834/id_rsa Username:docker}
	I0127 15:23:57.898906 1056546 sshutil.go:53] new ssh client: &{IP:192.168.72.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/pause-243834/id_rsa Username:docker}
	I0127 15:23:57.987350 1056546 ssh_runner.go:195] Run: systemctl --version
	I0127 15:23:58.013979 1056546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:23:58.188171 1056546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:23:58.194736 1056546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:23:58.194816 1056546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:23:58.204744 1056546 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 15:23:58.204774 1056546 start.go:495] detecting cgroup driver to use...
	I0127 15:23:58.204852 1056546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:23:58.229038 1056546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:23:58.246552 1056546 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:23:58.246613 1056546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:23:58.261779 1056546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:23:58.277449 1056546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:23:58.423075 1056546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:23:58.587532 1056546 docker.go:233] disabling docker service ...
	I0127 15:23:58.587617 1056546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:23:58.608521 1056546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:23:58.624021 1056546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:23:58.779539 1056546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:23:58.929889 1056546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:23:58.945639 1056546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:23:58.971226 1056546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 15:23:58.971307 1056546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:23:58.986693 1056546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:23:58.986788 1056546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:23:59.001975 1056546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:23:59.014794 1056546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:23:59.026569 1056546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:23:59.038681 1056546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:23:59.054427 1056546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:23:59.070896 1056546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:23:59.085486 1056546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:23:59.098882 1056546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 15:23:59.112601 1056546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:23:59.351183 1056546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:23:59.894163 1056546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:23:59.894230 1056546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:23:59.905648 1056546 start.go:563] Will wait 60s for crictl version
	I0127 15:23:59.905708 1056546 ssh_runner.go:195] Run: which crictl
	I0127 15:23:59.914886 1056546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:23:59.964358 1056546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:23:59.964438 1056546 ssh_runner.go:195] Run: crio --version
	I0127 15:24:00.020470 1056546 ssh_runner.go:195] Run: crio --version
	I0127 15:24:00.063254 1056546 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 15:24:00.064648 1056546 main.go:141] libmachine: (pause-243834) Calling .GetIP
	I0127 15:24:00.068154 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:24:00.068716 1056546 main.go:141] libmachine: (pause-243834) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c0:95", ip: ""} in network mk-pause-243834: {Iface:virbr4 ExpiryTime:2025-01-27 16:23:04 +0000 UTC Type:0 Mac:52:54:00:15:c0:95 Iaid: IPaddr:192.168.72.18 Prefix:24 Hostname:pause-243834 Clientid:01:52:54:00:15:c0:95}
	I0127 15:24:00.068740 1056546 main.go:141] libmachine: (pause-243834) DBG | domain pause-243834 has defined IP address 192.168.72.18 and MAC address 52:54:00:15:c0:95 in network mk-pause-243834
	I0127 15:24:00.069038 1056546 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 15:24:00.075410 1056546 kubeadm.go:883] updating cluster {Name:pause-243834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-243834 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portain
er:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:24:00.075584 1056546 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:24:00.075652 1056546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:24:00.130009 1056546 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 15:24:00.130048 1056546 crio.go:433] Images already preloaded, skipping extraction
	I0127 15:24:00.130116 1056546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:24:00.180632 1056546 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 15:24:00.180671 1056546 cache_images.go:84] Images are preloaded, skipping loading
	I0127 15:24:00.180682 1056546 kubeadm.go:934] updating node { 192.168.72.18 8443 v1.32.1 crio true true} ...
	I0127 15:24:00.180848 1056546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-243834 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-243834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:24:00.180954 1056546 ssh_runner.go:195] Run: crio config
	I0127 15:24:00.258546 1056546 cni.go:84] Creating CNI manager for ""
	I0127 15:24:00.258580 1056546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:24:00.258593 1056546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:24:00.258630 1056546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.18 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-243834 NodeName:pause-243834 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 15:24:00.258857 1056546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-243834"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:24:00.258944 1056546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 15:24:00.271298 1056546 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:24:00.271368 1056546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:24:00.286506 1056546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0127 15:24:00.307405 1056546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:24:00.329833 1056546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0127 15:24:00.368145 1056546 ssh_runner.go:195] Run: grep 192.168.72.18	control-plane.minikube.internal$ /etc/hosts
	I0127 15:24:00.373763 1056546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:24:00.604970 1056546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:24:00.638204 1056546 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834 for IP: 192.168.72.18
	I0127 15:24:00.638236 1056546 certs.go:194] generating shared ca certs ...
	I0127 15:24:00.638259 1056546 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:24:00.638455 1056546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:24:00.638519 1056546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:24:00.638533 1056546 certs.go:256] generating profile certs ...
	I0127 15:24:00.638640 1056546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834/client.key
	I0127 15:24:00.638713 1056546 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834/apiserver.key.c8cbbadd
	I0127 15:24:00.638759 1056546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834/proxy-client.key
	I0127 15:24:00.638893 1056546 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:24:00.638934 1056546 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:24:00.638947 1056546 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:24:00.638979 1056546 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:24:00.639017 1056546 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:24:00.639045 1056546 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:24:00.639104 1056546 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:24:00.640053 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:24:00.688098 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:24:00.754415 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:24:00.848806 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:24:00.919786 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 15:24:01.025550 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:24:01.087358 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:24:01.153777 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/pause-243834/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 15:24:01.213845 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:24:01.252305 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:24:01.284698 1056546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:24:01.314806 1056546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:24:01.335862 1056546 ssh_runner.go:195] Run: openssl version
	I0127 15:24:01.348321 1056546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:24:01.368507 1056546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:24:01.375593 1056546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:24:01.375662 1056546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:24:01.390124 1056546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:24:01.435307 1056546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:24:01.449706 1056546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:24:01.459621 1056546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:24:01.459711 1056546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:24:01.478938 1056546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:24:01.510586 1056546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:24:01.529807 1056546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:24:01.540760 1056546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:24:01.540831 1056546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:24:01.551243 1056546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 15:24:01.565474 1056546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:24:01.571216 1056546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:24:01.582138 1056546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:24:01.591749 1056546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:24:01.600845 1056546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:24:01.614626 1056546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:24:01.624908 1056546 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 15:24:01.635822 1056546 kubeadm.go:392] StartCluster: {Name:pause-243834 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-243834 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.18 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:
false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:24:01.635976 1056546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:24:01.636027 1056546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:24:01.705682 1056546 cri.go:89] found id: "457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8"
	I0127 15:24:01.705706 1056546 cri.go:89] found id: "813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36"
	I0127 15:24:01.705712 1056546 cri.go:89] found id: "8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27"
	I0127 15:24:01.705716 1056546 cri.go:89] found id: "f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd"
	I0127 15:24:01.705720 1056546 cri.go:89] found id: "c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae"
	I0127 15:24:01.705724 1056546 cri.go:89] found id: "402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6"
	I0127 15:24:01.705728 1056546 cri.go:89] found id: "b0a4538f0f55485dc4dad4bcaffd1790bf664eb36654ba58d8a516e3fcadf20e"
	I0127 15:24:01.705732 1056546 cri.go:89] found id: "1f801f8f745ddcc06b2ce6c205e1fa5a62d4284f65c95a2ada369e628d2e68f6"
	I0127 15:24:01.705736 1056546 cri.go:89] found id: "efd6d8fb54e095a0cb5b4dcc0eef963899180e443e6520b5ea4f8cbc11ef6c96"
	I0127 15:24:01.705744 1056546 cri.go:89] found id: "a80fc674e0622b49b43fcbe7d43ceddfc39ff8df93736678e5bf40fce5cc2b5a"
	I0127 15:24:01.705749 1056546 cri.go:89] found id: ""
	I0127 15:24:01.705793 1056546 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-243834 -n pause-243834
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-243834 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-243834 logs -n 25: (1.631874111s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-230388 sudo                                | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo cat                            | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo cat                            | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                                | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                                | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                                | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo cat                            | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo cat                            | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                                | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                                | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                                | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo find                           | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo crio                           | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-230388                                     | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC | 27 Jan 25 15:21 UTC |
	| start   | -p running-upgrade-846704                            | minikube                  | jenkins | v1.26.0 | 27 Jan 25 15:21 UTC | 27 Jan 25 15:23 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	| delete  | -p offline-crio-845871                               | offline-crio-845871       | jenkins | v1.35.0 | 27 Jan 25 15:22 UTC | 27 Jan 25 15:22 UTC |
	| start   | -p pause-243834 --memory=2048                        | pause-243834              | jenkins | v1.35.0 | 27 Jan 25 15:22 UTC | 27 Jan 25 15:23 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-861726 stop                          | minikube                  | jenkins | v1.26.0 | 27 Jan 25 15:22 UTC | 27 Jan 25 15:22 UTC |
	| start   | -p stopped-upgrade-861726                            | stopped-upgrade-861726    | jenkins | v1.35.0 | 27 Jan 25 15:22 UTC | 27 Jan 25 15:23 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-846704                            | running-upgrade-846704    | jenkins | v1.35.0 | 27 Jan 25 15:23 UTC | 27 Jan 25 15:24 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-243834                                      | pause-243834              | jenkins | v1.35.0 | 27 Jan 25 15:23 UTC | 27 Jan 25 15:24 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-861726                            | stopped-upgrade-861726    | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC | 27 Jan 25 15:24 UTC |
	| start   | -p force-systemd-flag-937953                         | force-systemd-flag-937953 | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-846704                            | running-upgrade-846704    | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC | 27 Jan 25 15:24 UTC |
	| start   | -p force-systemd-env-766957                          | force-systemd-env-766957  | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 15:24:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 15:24:43.248744 1057200 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:24:43.249031 1057200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:24:43.249042 1057200 out.go:358] Setting ErrFile to fd 2...
	I0127 15:24:43.249050 1057200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:24:43.249261 1057200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:24:43.249908 1057200 out.go:352] Setting JSON to false
	I0127 15:24:43.251012 1057200 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22030,"bootTime":1737969453,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:24:43.251109 1057200 start.go:139] virtualization: kvm guest
	I0127 15:24:43.253637 1057200 out.go:177] * [force-systemd-env-766957] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:24:43.255186 1057200 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:24:43.255218 1057200 notify.go:220] Checking for updates...
	I0127 15:24:43.258140 1057200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:24:43.259502 1057200 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:24:43.260788 1057200 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:24:43.262165 1057200 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:24:43.263516 1057200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0127 15:24:43.265397 1057200 config.go:182] Loaded profile config "force-systemd-flag-937953": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:24:43.265512 1057200 config.go:182] Loaded profile config "kubernetes-upgrade-878562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:24:43.265658 1057200 config.go:182] Loaded profile config "pause-243834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:24:43.265771 1057200 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:24:43.303893 1057200 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 15:24:43.305471 1057200 start.go:297] selected driver: kvm2
	I0127 15:24:43.305492 1057200 start.go:901] validating driver "kvm2" against <nil>
	I0127 15:24:43.305509 1057200 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:24:43.306323 1057200 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:24:43.306408 1057200 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:24:43.321872 1057200 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:24:43.321932 1057200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 15:24:43.322173 1057200 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 15:24:43.322208 1057200 cni.go:84] Creating CNI manager for ""
	I0127 15:24:43.322280 1057200 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:24:43.322297 1057200 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 15:24:43.322389 1057200 start.go:340] cluster config:
	{Name:force-systemd-env-766957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:force-systemd-env-766957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:24:43.322503 1057200 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:24:43.324339 1057200 out.go:177] * Starting "force-systemd-env-766957" primary control-plane node in "force-systemd-env-766957" cluster
	I0127 15:24:43.325579 1057200 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:24:43.325619 1057200 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 15:24:43.325636 1057200 cache.go:56] Caching tarball of preloaded images
	I0127 15:24:43.325734 1057200 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:24:43.325746 1057200 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 15:24:43.325859 1057200 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/force-systemd-env-766957/config.json ...
	I0127 15:24:43.325885 1057200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/force-systemd-env-766957/config.json: {Name:mkc198f6b5eab49badd99b053a7a3dc61a99add9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:24:43.326065 1057200 start.go:360] acquireMachinesLock for force-systemd-env-766957: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:24:43.326107 1057200 start.go:364] duration metric: took 23.754µs to acquireMachinesLock for "force-systemd-env-766957"
	I0127 15:24:43.326131 1057200 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-766957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:force-syst
emd-env-766957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:24:43.326208 1057200 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 15:24:42.057241 1056546 pod_ready.go:103] pod "etcd-pause-243834" in "kube-system" namespace has status "Ready":"False"
	I0127 15:24:43.558057 1056546 pod_ready.go:93] pod "etcd-pause-243834" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:43.558085 1056546 pod_ready.go:82] duration metric: took 14.507887932s for pod "etcd-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.558100 1056546 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.564032 1056546 pod_ready.go:93] pod "kube-apiserver-pause-243834" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:43.564058 1056546 pod_ready.go:82] duration metric: took 5.950977ms for pod "kube-apiserver-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.564075 1056546 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.569876 1056546 pod_ready.go:93] pod "kube-controller-manager-pause-243834" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:43.569904 1056546 pod_ready.go:82] duration metric: took 5.82103ms for pod "kube-controller-manager-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.569917 1056546 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-68s7d" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.576393 1056546 pod_ready.go:93] pod "kube-proxy-68s7d" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:43.576423 1056546 pod_ready.go:82] duration metric: took 6.497499ms for pod "kube-proxy-68s7d" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.576437 1056546 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.583520 1056546 pod_ready.go:93] pod "kube-scheduler-pause-243834" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:43.583545 1056546 pod_ready.go:82] duration metric: took 7.098844ms for pod "kube-scheduler-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:43.583555 1056546 pod_ready.go:39] duration metric: took 14.543186439s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:24:43.583577 1056546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:24:43.604570 1056546 ops.go:34] apiserver oom_adj: -16
	I0127 15:24:43.604603 1056546 kubeadm.go:597] duration metric: took 41.823680465s to restartPrimaryControlPlane
	I0127 15:24:43.604617 1056546 kubeadm.go:394] duration metric: took 41.968806972s to StartCluster
	I0127 15:24:43.604642 1056546 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:24:43.604732 1056546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:24:43.605585 1056546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:24:43.605842 1056546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.18 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:24:43.605896 1056546 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:24:43.606185 1056546 config.go:182] Loaded profile config "pause-243834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:24:43.607848 1056546 out.go:177] * Enabled addons: 
	I0127 15:24:43.607872 1056546 out.go:177] * Verifying Kubernetes components...
	I0127 15:24:43.609154 1056546 addons.go:514] duration metric: took 3.266499ms for enable addons: enabled=[]
	I0127 15:24:43.609199 1056546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:24:43.828838 1056546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:24:43.851812 1056546 node_ready.go:35] waiting up to 6m0s for node "pause-243834" to be "Ready" ...
	I0127 15:24:43.855555 1056546 node_ready.go:49] node "pause-243834" has status "Ready":"True"
	I0127 15:24:43.855579 1056546 node_ready.go:38] duration metric: took 3.715999ms for node "pause-243834" to be "Ready" ...
	I0127 15:24:43.855589 1056546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:24:43.959555 1056546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2sw96" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:44.355319 1056546 pod_ready.go:93] pod "coredns-668d6bf9bc-2sw96" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:44.355347 1056546 pod_ready.go:82] duration metric: took 395.756776ms for pod "coredns-668d6bf9bc-2sw96" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:44.355360 1056546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:44.756003 1056546 pod_ready.go:93] pod "etcd-pause-243834" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:44.756032 1056546 pod_ready.go:82] duration metric: took 400.664212ms for pod "etcd-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:44.756047 1056546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:45.156021 1056546 pod_ready.go:93] pod "kube-apiserver-pause-243834" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:45.156045 1056546 pod_ready.go:82] duration metric: took 399.982465ms for pod "kube-apiserver-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:45.156056 1056546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:45.555840 1056546 pod_ready.go:93] pod "kube-controller-manager-pause-243834" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:45.555876 1056546 pod_ready.go:82] duration metric: took 399.811817ms for pod "kube-controller-manager-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:45.555893 1056546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-68s7d" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:46.290358 1056775 kubeadm.go:310] [api-check] The API server is healthy after 5.502270497s
	I0127 15:24:46.308746 1056775 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:24:46.327143 1056775 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:24:46.370298 1056775 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:24:46.370561 1056775 kubeadm.go:310] [mark-control-plane] Marking the node force-systemd-flag-937953 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:24:46.386543 1056775 kubeadm.go:310] [bootstrap-token] Using token: 4uv25w.n4lqo3b4zl4f8en4
	I0127 15:24:46.388199 1056775 out.go:235]   - Configuring RBAC rules ...
	I0127 15:24:46.388352 1056775 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:24:46.403137 1056775 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:24:46.415966 1056775 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:24:46.420331 1056775 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:24:46.426911 1056775 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:24:46.438488 1056775 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:24:45.954957 1056546 pod_ready.go:93] pod "kube-proxy-68s7d" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:45.954986 1056546 pod_ready.go:82] duration metric: took 399.084689ms for pod "kube-proxy-68s7d" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:45.955001 1056546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:46.355476 1056546 pod_ready.go:93] pod "kube-scheduler-pause-243834" in "kube-system" namespace has status "Ready":"True"
	I0127 15:24:46.355511 1056546 pod_ready.go:82] duration metric: took 400.500511ms for pod "kube-scheduler-pause-243834" in "kube-system" namespace to be "Ready" ...
	I0127 15:24:46.355525 1056546 pod_ready.go:39] duration metric: took 2.499924533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:24:46.355546 1056546 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:24:46.355603 1056546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:24:46.377339 1056546 api_server.go:72] duration metric: took 2.771455477s to wait for apiserver process to appear ...
	I0127 15:24:46.377375 1056546 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:24:46.377410 1056546 api_server.go:253] Checking apiserver healthz at https://192.168.72.18:8443/healthz ...
	I0127 15:24:46.383631 1056546 api_server.go:279] https://192.168.72.18:8443/healthz returned 200:
	ok
	I0127 15:24:46.384762 1056546 api_server.go:141] control plane version: v1.32.1
	I0127 15:24:46.384795 1056546 api_server.go:131] duration metric: took 7.410023ms to wait for apiserver health ...
	I0127 15:24:46.384805 1056546 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:24:46.557561 1056546 system_pods.go:59] 6 kube-system pods found
	I0127 15:24:46.557601 1056546 system_pods.go:61] "coredns-668d6bf9bc-2sw96" [b56c511d-a960-4b42-ad94-6ceb46987306] Running
	I0127 15:24:46.557607 1056546 system_pods.go:61] "etcd-pause-243834" [69834d77-38c2-4a99-8e13-15a40b35c51d] Running
	I0127 15:24:46.557612 1056546 system_pods.go:61] "kube-apiserver-pause-243834" [76222e3c-c2fa-4dd1-8459-ba69e9a592d1] Running
	I0127 15:24:46.557615 1056546 system_pods.go:61] "kube-controller-manager-pause-243834" [b7f1126d-c75e-4251-97d7-c9273f82801a] Running
	I0127 15:24:46.557619 1056546 system_pods.go:61] "kube-proxy-68s7d" [6fd80540-1ea3-4591-8ddc-68031a7950f5] Running
	I0127 15:24:46.557622 1056546 system_pods.go:61] "kube-scheduler-pause-243834" [60a69e96-b7f7-49ca-a7d4-4065de7648a9] Running
	I0127 15:24:46.557628 1056546 system_pods.go:74] duration metric: took 172.816968ms to wait for pod list to return data ...
	I0127 15:24:46.557636 1056546 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:24:46.756012 1056546 default_sa.go:45] found service account: "default"
	I0127 15:24:46.756045 1056546 default_sa.go:55] duration metric: took 198.402626ms for default service account to be created ...
	I0127 15:24:46.756058 1056546 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:24:46.957239 1056546 system_pods.go:87] 6 kube-system pods found
	I0127 15:24:47.155708 1056546 system_pods.go:105] "coredns-668d6bf9bc-2sw96" [b56c511d-a960-4b42-ad94-6ceb46987306] Running
	I0127 15:24:47.155737 1056546 system_pods.go:105] "etcd-pause-243834" [69834d77-38c2-4a99-8e13-15a40b35c51d] Running
	I0127 15:24:47.155745 1056546 system_pods.go:105] "kube-apiserver-pause-243834" [76222e3c-c2fa-4dd1-8459-ba69e9a592d1] Running
	I0127 15:24:47.155762 1056546 system_pods.go:105] "kube-controller-manager-pause-243834" [b7f1126d-c75e-4251-97d7-c9273f82801a] Running
	I0127 15:24:47.155770 1056546 system_pods.go:105] "kube-proxy-68s7d" [6fd80540-1ea3-4591-8ddc-68031a7950f5] Running
	I0127 15:24:47.155780 1056546 system_pods.go:105] "kube-scheduler-pause-243834" [60a69e96-b7f7-49ca-a7d4-4065de7648a9] Running
	I0127 15:24:47.155791 1056546 system_pods.go:147] duration metric: took 399.724193ms to wait for k8s-apps to be running ...
	I0127 15:24:47.155802 1056546 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 15:24:47.155861 1056546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:24:47.176105 1056546 system_svc.go:56] duration metric: took 20.291275ms WaitForService to wait for kubelet
	I0127 15:24:47.176138 1056546 kubeadm.go:582] duration metric: took 3.570267149s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:24:47.176160 1056546 node_conditions.go:102] verifying NodePressure condition ...
	I0127 15:24:47.355875 1056546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 15:24:47.355913 1056546 node_conditions.go:123] node cpu capacity is 2
	I0127 15:24:47.355931 1056546 node_conditions.go:105] duration metric: took 179.764376ms to run NodePressure ...
	I0127 15:24:47.355950 1056546 start.go:241] waiting for startup goroutines ...
	I0127 15:24:47.355961 1056546 start.go:246] waiting for cluster config update ...
	I0127 15:24:47.355972 1056546 start.go:255] writing updated cluster config ...
	I0127 15:24:47.356358 1056546 ssh_runner.go:195] Run: rm -f paused
	I0127 15:24:47.429574 1056546 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 15:24:47.431692 1056546 out.go:177] * Done! kubectl is now configured to use "pause-243834" cluster and "default" namespace by default
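For reference, the apiserver health wait recorded above (api_server.go polling https://192.168.72.18:8443/healthz until it returns 200 with body "ok") can be approximated with a short standalone probe. The sketch below is illustrative only: the endpoint is taken from this log, and the 2-minute deadline and InsecureSkipVerify setting are assumptions made to keep it self-contained; it is not minikube's implementation.

	// healthz_probe.go - minimal sketch of polling an apiserver /healthz endpoint,
	// modeled on the wait loop visible in the log above. Assumptions are noted inline.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint from the log line "Checking apiserver healthz at https://192.168.72.18:8443/healthz".
		const url = "https://192.168.72.18:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster uses a self-signed CA; skipping verification here
				// is an assumption to keep the sketch dependency-free.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		// Deadline is an assumption; the log shows minikube using its own wait budget.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// The log expects HTTP 200 together with the literal body "ok".
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}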
	I0127 15:24:46.696318 1056775 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:24:47.155234 1056775 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:24:47.696832 1056775 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:24:47.697927 1056775 kubeadm.go:310] 
	I0127 15:24:47.698031 1056775 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:24:47.698042 1056775 kubeadm.go:310] 
	I0127 15:24:47.698169 1056775 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:24:47.698183 1056775 kubeadm.go:310] 
	I0127 15:24:47.698218 1056775 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:24:47.698297 1056775 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:24:47.698368 1056775 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:24:47.698377 1056775 kubeadm.go:310] 
	I0127 15:24:47.698447 1056775 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:24:47.698458 1056775 kubeadm.go:310] 
	I0127 15:24:47.698516 1056775 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:24:47.698529 1056775 kubeadm.go:310] 
	I0127 15:24:47.698591 1056775 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:24:47.698690 1056775 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:24:47.698794 1056775 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:24:47.698821 1056775 kubeadm.go:310] 
	I0127 15:24:47.698952 1056775 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:24:47.699055 1056775 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:24:47.699066 1056775 kubeadm.go:310] 
	I0127 15:24:47.699210 1056775 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4uv25w.n4lqo3b4zl4f8en4 \
	I0127 15:24:47.699359 1056775 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:24:47.699398 1056775 kubeadm.go:310] 	--control-plane 
	I0127 15:24:47.699406 1056775 kubeadm.go:310] 
	I0127 15:24:47.699522 1056775 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:24:47.699536 1056775 kubeadm.go:310] 
	I0127 15:24:47.699663 1056775 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4uv25w.n4lqo3b4zl4f8en4 \
	I0127 15:24:47.699798 1056775 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:24:47.700718 1056775 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:24:47.700775 1056775 cni.go:84] Creating CNI manager for ""
	I0127 15:24:47.700795 1056775 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:24:47.702592 1056775 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
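The entries below are CRI-O debug responses to standard CRI calls (/runtime.v1.RuntimeService/Version, /runtime.v1.RuntimeService/ListContainers, /runtime.v1.ImageService/ImageFsInfo). For orientation, here is a minimal Go sketch that issues the same Version and ListContainers calls directly against the CRI-O socket. The socket path and the use of k8s.io/cri-api over gRPC are assumptions for the sketch; this is not the collector that produced this log.

	// cri_list.go - minimal sketch of querying a CRI runtime (here CRI-O) for its
	// version and container list, mirroring the requests seen in the log below.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's conventional socket location (assumption; adjust if configured differently).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Mirrors the VersionRequest/VersionResponse pair in the log
		// (RuntimeName:cri-o, RuntimeVersion:1.29.1, RuntimeApiVersion:v1).
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("version: %v", err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// An empty filter returns the full container list, matching the log's
		// "No filters were applied, returning full container list".
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("list containers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\tattempt=%d\t%s\n",
				c.Id[:13], c.GetMetadata().GetName(), c.GetMetadata().GetAttempt(), c.GetState())
		}
	}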
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.243190503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991488243164213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25e62c49-2c9f-4426-9290-0267511ab829 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.248664363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=151f4a05-d9f3-4c60-b35a-c5e989572acc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.248769705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=151f4a05-d9f3-4c60-b35a-c5e989572acc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.249253622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737991464286153603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737991464314067809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737991464299492136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737991464315154053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2,PodSandboxId:aad255f9acf6e432b983efb5dd8b6908e55e004653403f33799b218d69970ee3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737991451103395985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b,PodSandboxId:f7c2b3cdba583f8368ca35c96d1e63d67902587db9e0fdc2ddba6ef181c05f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737991445962559354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737991441110155498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737991440823968842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737991440745692442,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737991440756654896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae,PodSandboxId:b0fda6759f2999807245d8de62b1153a4d47fa3d2f86fd98b7bf13423a08d160,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737991416614825826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6,PodSandboxId:49e2b2e271e854cd2316790f1dce382d637fce5d39620f44b5d5dfb82be11bdb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737991415846406421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=151f4a05-d9f3-4c60-b35a-c5e989572acc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.303487357Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=079df47c-34b9-411a-9aa6-9fee49ecaa2a name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.303611788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=079df47c-34b9-411a-9aa6-9fee49ecaa2a name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.305439808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c798216-36a3-4937-bfb0-cc2db726af95 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.306122323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991488306085846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c798216-36a3-4937-bfb0-cc2db726af95 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.306724180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03b4f472-d2dd-4274-9407-44b5f82e85a6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.306834762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03b4f472-d2dd-4274-9407-44b5f82e85a6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.307241507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737991464286153603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737991464314067809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737991464299492136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737991464315154053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2,PodSandboxId:aad255f9acf6e432b983efb5dd8b6908e55e004653403f33799b218d69970ee3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737991451103395985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b,PodSandboxId:f7c2b3cdba583f8368ca35c96d1e63d67902587db9e0fdc2ddba6ef181c05f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737991445962559354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737991441110155498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737991440823968842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737991440745692442,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737991440756654896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae,PodSandboxId:b0fda6759f2999807245d8de62b1153a4d47fa3d2f86fd98b7bf13423a08d160,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737991416614825826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6,PodSandboxId:49e2b2e271e854cd2316790f1dce382d637fce5d39620f44b5d5dfb82be11bdb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737991415846406421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03b4f472-d2dd-4274-9407-44b5f82e85a6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.358075554Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=718c8654-a8c4-48f1-945d-ab03d7130b9b name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.358204336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=718c8654-a8c4-48f1-945d-ab03d7130b9b name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.359551350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=915c6f6f-1501-444d-b4f5-62abd0b7c490 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.360517008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991488360479640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=915c6f6f-1501-444d-b4f5-62abd0b7c490 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.361335785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf2f5d05-e73d-4a6b-812e-271bc7af4154 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.361440308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf2f5d05-e73d-4a6b-812e-271bc7af4154 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.362085263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737991464286153603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737991464314067809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737991464299492136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737991464315154053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2,PodSandboxId:aad255f9acf6e432b983efb5dd8b6908e55e004653403f33799b218d69970ee3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737991451103395985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b,PodSandboxId:f7c2b3cdba583f8368ca35c96d1e63d67902587db9e0fdc2ddba6ef181c05f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737991445962559354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737991441110155498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737991440823968842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737991440745692442,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737991440756654896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae,PodSandboxId:b0fda6759f2999807245d8de62b1153a4d47fa3d2f86fd98b7bf13423a08d160,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737991416614825826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6,PodSandboxId:49e2b2e271e854cd2316790f1dce382d637fce5d39620f44b5d5dfb82be11bdb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737991415846406421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf2f5d05-e73d-4a6b-812e-271bc7af4154 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.421652906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94c7bf11-adf4-4e18-87d7-d35f11fae547 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.421819681Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94c7bf11-adf4-4e18-87d7-d35f11fae547 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.423245323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69f91fe8-1885-4a3d-8795-966b082f511b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.423770818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991488423736334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69f91fe8-1885-4a3d-8795-966b082f511b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.424525626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=934383a9-675b-40d6-8593-bac7d995195a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.424616631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=934383a9-675b-40d6-8593-bac7d995195a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:48 pause-243834 crio[2470]: time="2025-01-27 15:24:48.425148780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737991464286153603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737991464314067809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737991464299492136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737991464315154053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2,PodSandboxId:aad255f9acf6e432b983efb5dd8b6908e55e004653403f33799b218d69970ee3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737991451103395985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b,PodSandboxId:f7c2b3cdba583f8368ca35c96d1e63d67902587db9e0fdc2ddba6ef181c05f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737991445962559354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737991441110155498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737991440823968842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737991440745692442,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737991440756654896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae,PodSandboxId:b0fda6759f2999807245d8de62b1153a4d47fa3d2f86fd98b7bf13423a08d160,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737991416614825826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6,PodSandboxId:49e2b2e271e854cd2316790f1dce382d637fce5d39620f44b5d5dfb82be11bdb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737991415846406421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=934383a9-675b-40d6-8593-bac7d995195a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cf5178d1059f0       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   24 seconds ago       Running             kube-apiserver            2                   f4da7a2c76882       kube-apiserver-pause-243834
	5273e489a267e       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   24 seconds ago       Running             kube-scheduler            2                   7b7eed55ce7d6       kube-scheduler-pause-243834
	b14e81d621061       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   24 seconds ago       Running             kube-controller-manager   2                   932a3e1dd3ecd       kube-controller-manager-pause-243834
	1dc7e6d4ce65d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   24 seconds ago       Running             etcd                      2                   4498330b815f3       etcd-pause-243834
	cdec3e290051c       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   37 seconds ago       Running             kube-proxy                1                   aad255f9acf6e       kube-proxy-68s7d
	7fba0ec384a4d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   42 seconds ago       Running             coredns                   1                   f7c2b3cdba583       coredns-668d6bf9bc-2sw96
	457ca2d324e1d       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   47 seconds ago       Exited              kube-controller-manager   1                   932a3e1dd3ecd       kube-controller-manager-pause-243834
	813ee4bb11a15       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   47 seconds ago       Exited              etcd                      1                   4498330b815f3       etcd-pause-243834
	8b4541fde6a62       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   47 seconds ago       Exited              kube-scheduler            1                   7b7eed55ce7d6       kube-scheduler-pause-243834
	f51148737687a       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   47 seconds ago       Exited              kube-apiserver            1                   f4da7a2c76882       kube-apiserver-pause-243834
	c94cbaf412489       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   b0fda6759f299       coredns-668d6bf9bc-2sw96
	402e01205dacc       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   About a minute ago   Exited              kube-proxy                0                   49e2b2e271e85       kube-proxy-68s7d
	
	
	==> coredns [7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b] <==
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46923 - 20357 "HINFO IN 1331876287313085191.4781384439551221230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.05676906s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1842266786]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 15:24:06.068) (total time: 10001ms):
	Trace[1842266786]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:24:16.069)
	Trace[1842266786]: [10.001035599s] [10.001035599s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[275836624]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 15:24:06.069) (total time: 10000ms):
	Trace[275836624]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:24:16.070)
	Trace[275836624]: [10.000623065s] [10.000623065s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1371882890]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 15:24:06.069) (total time: 10000ms):
	Trace[1371882890]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:24:16.070)
	Trace[1371882890]: [10.000932682s] [10.000932682s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47766->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47766->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47796->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47796->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47782->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47782->10.96.0.1:443: read: connection reset by peer
	
	
	==> coredns [c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-243834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-243834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=pause-243834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T15_23_30_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 15:23:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-243834
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 15:24:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 15:24:27 +0000   Mon, 27 Jan 2025 15:23:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 15:24:27 +0000   Mon, 27 Jan 2025 15:23:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 15:24:27 +0000   Mon, 27 Jan 2025 15:23:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 15:24:27 +0000   Mon, 27 Jan 2025 15:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.18
	  Hostname:    pause-243834
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b52e2c61abcc4d77b8d17f099e23e0c6
	  System UUID:                b52e2c61-abcc-4d77-b8d1-7f099e23e0c6
	  Boot ID:                    721cc1d7-4efb-43c0-b15e-d756f436fb30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-2sw96                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
	  kube-system                 etcd-pause-243834                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         78s
	  kube-system                 kube-apiserver-pause-243834             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-243834    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-68s7d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-243834             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-243834 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-243834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-243834 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                77s                kubelet          Node pause-243834 status is now: NodeReady
	  Normal  RegisteredNode           75s                node-controller  Node pause-243834 event: Registered Node pause-243834 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-243834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-243834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-243834 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-243834 event: Registered Node pause-243834 in Controller
	
	
	==> dmesg <==
	[  +0.198158] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.165980] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.326965] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +4.836442] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +0.068415] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.324878] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +1.203249] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.376162] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.092382] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.145829] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.300704] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[ +11.766184] kauditd_printk_skb: 92 callbacks suppressed
	[ +11.265319] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.082966] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.072948] systemd-fstab-generator[2307]: Ignoring "noauto" option for root device
	[  +0.198221] systemd-fstab-generator[2321]: Ignoring "noauto" option for root device
	[  +0.154819] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	[  +0.343108] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[  +1.266524] systemd-fstab-generator[2574]: Ignoring "noauto" option for root device
	[Jan27 15:24] kauditd_printk_skb: 180 callbacks suppressed
	[  +5.286409] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.320898] systemd-fstab-generator[3347]: Ignoring "noauto" option for root device
	[  +0.082183] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.322541] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.904553] systemd-fstab-generator[3714]: Ignoring "noauto" option for root device
	
	
	==> etcd [1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a] <==
	{"level":"info","ts":"2025-01-27T15:24:24.689944Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.18:2380"}
	{"level":"info","ts":"2025-01-27T15:24:25.971247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T15:24:25.971358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T15:24:25.971388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 received MsgPreVoteResp from 4b54c677330ae1f4 at term 2"}
	{"level":"info","ts":"2025-01-27T15:24:25.971411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T15:24:25.971440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 received MsgVoteResp from 4b54c677330ae1f4 at term 3"}
	{"level":"info","ts":"2025-01-27T15:24:25.971459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T15:24:25.971477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4b54c677330ae1f4 elected leader 4b54c677330ae1f4 at term 3"}
	{"level":"info","ts":"2025-01-27T15:24:25.977806Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T15:24:25.978652Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T15:24:25.977765Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"4b54c677330ae1f4","local-member-attributes":"{Name:pause-243834 ClientURLs:[https://192.168.72.18:2379]}","request-path":"/0/members/4b54c677330ae1f4/attributes","cluster-id":"2f920756fb341899","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T15:24:25.979346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T15:24:25.979627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T15:24:25.979665Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T15:24:25.979720Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.18:2379"}
	{"level":"info","ts":"2025-01-27T15:24:25.980049Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T15:24:25.980628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T15:24:36.688247Z","caller":"traceutil/trace.go:171","msg":"trace[1390294891] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"138.948287ms","start":"2025-01-27T15:24:36.549285Z","end":"2025-01-27T15:24:36.688233Z","steps":["trace[1390294891] 'process raft request'  (duration: 138.863581ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T15:24:37.239439Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.132054ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16281802003689431088 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" mod_revision:456 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" value_size:6721 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T15:24:37.240107Z","caller":"traceutil/trace.go:171","msg":"trace[1895679455] linearizableReadLoop","detail":"{readStateIndex:493; appliedIndex:492; }","duration":"193.996578ms","start":"2025-01-27T15:24:37.046098Z","end":"2025-01-27T15:24:37.240095Z","steps":["trace[1895679455] 'read index received'  (duration: 24.502µs)","trace[1895679455] 'applied index is now lower than readState.Index'  (duration: 193.971121ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T15:24:37.240228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.11737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-243834\" limit:1 ","response":"range_response_count:1 size:5836"}
	{"level":"info","ts":"2025-01-27T15:24:37.240264Z","caller":"traceutil/trace.go:171","msg":"trace[1602746611] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-243834; range_end:; response_count:1; response_revision:457; }","duration":"194.183033ms","start":"2025-01-27T15:24:37.046074Z","end":"2025-01-27T15:24:37.240257Z","steps":["trace[1602746611] 'agreement among raft nodes before linearized reading'  (duration: 194.106466ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T15:24:37.240387Z","caller":"traceutil/trace.go:171","msg":"trace[711455383] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"535.909223ms","start":"2025-01-27T15:24:36.704472Z","end":"2025-01-27T15:24:37.240381Z","steps":["trace[711455383] 'process raft request'  (duration: 273.50864ms)","trace[711455383] 'compare'  (duration: 260.768105ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T15:24:37.240462Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T15:24:36.704450Z","time spent":"535.978886ms","remote":"127.0.0.1:46890","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6783,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" mod_revision:456 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" value_size:6721 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" > >"}
	{"level":"info","ts":"2025-01-27T15:24:37.625635Z","caller":"traceutil/trace.go:171","msg":"trace[1341642787] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"122.202932ms","start":"2025-01-27T15:24:37.503413Z","end":"2025-01-27T15:24:37.625616Z","steps":["trace[1341642787] 'process raft request'  (duration: 84.83988ms)","trace[1341642787] 'compare'  (duration: 37.05565ms)"],"step_count":2}
	
	
	==> etcd [813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36] <==
	{"level":"info","ts":"2025-01-27T15:24:01.236128Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-01-27T15:24:01.287398Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"2f920756fb341899","local-member-id":"4b54c677330ae1f4","commit-index":403}
	{"level":"info","ts":"2025-01-27T15:24:01.288481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 switched to configuration voters=()"}
	{"level":"info","ts":"2025-01-27T15:24:01.289178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 became follower at term 2"}
	{"level":"info","ts":"2025-01-27T15:24:01.289495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4b54c677330ae1f4 [peers: [], term: 2, commit: 403, applied: 0, lastindex: 403, lastterm: 2]"}
	{"level":"warn","ts":"2025-01-27T15:24:01.297255Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-01-27T15:24:01.335539Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":389}
	{"level":"info","ts":"2025-01-27T15:24:01.348984Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-01-27T15:24:01.378414Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4b54c677330ae1f4","timeout":"7s"}
	{"level":"info","ts":"2025-01-27T15:24:01.378729Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4b54c677330ae1f4"}
	{"level":"info","ts":"2025-01-27T15:24:01.378776Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"4b54c677330ae1f4","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T15:24:01.379296Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T15:24:01.406525Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T15:24:01.406822Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"4b54c677330ae1f4","initial-advertise-peer-urls":["https://192.168.72.18:2380"],"listen-peer-urls":["https://192.168.72.18:2380"],"advertise-client-urls":["https://192.168.72.18:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.18:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T15:24:01.406847Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T15:24:01.407278Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T15:24:01.416526Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.72.18:2380"}
	{"level":"info","ts":"2025-01-27T15:24:01.416547Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.18:2380"}
	{"level":"info","ts":"2025-01-27T15:24:01.416559Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T15:24:01.416613Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T15:24:01.416621Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T15:24:01.417010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 switched to configuration voters=(5428181666148049396)"}
	{"level":"info","ts":"2025-01-27T15:24:01.417069Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2f920756fb341899","local-member-id":"4b54c677330ae1f4","added-peer-id":"4b54c677330ae1f4","added-peer-peer-urls":["https://192.168.72.18:2380"]}
	{"level":"info","ts":"2025-01-27T15:24:01.417199Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f920756fb341899","local-member-id":"4b54c677330ae1f4","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T15:24:01.417251Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 15:24:48 up 1 min,  0 users,  load average: 0.93, 0.31, 0.11
	Linux pause-243834 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012] <==
	I0127 15:24:27.257042       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 15:24:27.258048       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 15:24:27.268550       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 15:24:27.292592       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 15:24:27.303713       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 15:24:27.303775       1 policy_source.go:240] refreshing policies
	I0127 15:24:27.315259       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 15:24:27.324973       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 15:24:27.325090       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 15:24:27.325288       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 15:24:27.326792       1 aggregator.go:171] initial CRD sync complete...
	I0127 15:24:27.326830       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 15:24:27.326837       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 15:24:27.326842       1 cache.go:39] Caches are synced for autoregister controller
	I0127 15:24:27.327071       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0127 15:24:27.369079       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0127 15:24:27.656542       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 15:24:28.159129       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 15:24:28.872164       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 15:24:28.921314       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 15:24:28.964458       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 15:24:28.975396       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 15:24:30.766030       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 15:24:30.916317       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 15:24:33.430133       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd] <==
	I0127 15:24:01.254403       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0127 15:24:02.115041       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:02.115197       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0127 15:24:02.117990       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 15:24:02.128958       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 15:24:02.135754       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 15:24:02.138484       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 15:24:02.138832       1 instance.go:233] Using reconciler: lease
	W0127 15:24:02.139840       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:03.116083       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:03.116133       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:03.140563       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:04.434439       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:04.842128       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:05.002374       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:06.660755       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:07.317777       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:07.607804       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:10.765599       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:11.269148       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:11.616228       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:18.092581       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:18.189589       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:18.501186       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0127 15:24:22.140109       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8] <==
	
	
	==> kube-controller-manager [b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f] <==
	I0127 15:24:30.612742       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 15:24:30.612946       1 shared_informer.go:320] Caches are synced for disruption
	I0127 15:24:30.613125       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 15:24:30.613824       1 shared_informer.go:320] Caches are synced for GC
	I0127 15:24:30.613937       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 15:24:30.613992       1 shared_informer.go:320] Caches are synced for deployment
	I0127 15:24:30.616988       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 15:24:30.625435       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 15:24:30.632975       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 15:24:30.644192       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 15:24:30.646501       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 15:24:30.648760       1 shared_informer.go:320] Caches are synced for taint
	I0127 15:24:30.648852       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 15:24:30.648956       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-243834"
	I0127 15:24:30.648990       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 15:24:30.662656       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 15:24:30.662939       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 15:24:30.663031       1 shared_informer.go:320] Caches are synced for job
	I0127 15:24:30.663120       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 15:24:30.663261       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 15:24:30.669480       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 15:24:33.438183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="69.706586ms"
	I0127 15:24:33.471323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="32.996921ms"
	I0127 15:24:33.520443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.015051ms"
	I0127 15:24:33.520600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.178µs"
	
	
	==> kube-proxy [402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 15:23:36.949468       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 15:23:36.980449       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.18"]
	E0127 15:23:36.981088       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 15:23:37.031562       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 15:23:37.031681       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 15:23:37.031745       1 server_linux.go:170] "Using iptables Proxier"
	I0127 15:23:37.035378       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 15:23:37.036159       1 server.go:497] "Version info" version="v1.32.1"
	I0127 15:23:37.036191       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:23:37.041555       1 config.go:199] "Starting service config controller"
	I0127 15:23:37.047269       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 15:23:37.047357       1 config.go:329] "Starting node config controller"
	I0127 15:23:37.047369       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 15:23:37.052036       1 config.go:105] "Starting endpoint slice config controller"
	I0127 15:23:37.052154       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 15:23:37.152699       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 15:23:37.152697       1 shared_informer.go:320] Caches are synced for node config
	I0127 15:23:37.152766       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2] <==
	 >
	E0127 15:24:11.299591       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 15:24:21.303022       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-243834\": net/http: TLS handshake timeout"
	E0127 15:24:22.375044       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-243834\": dial tcp 192.168.72.18:8443: connect: connection refused"
	I0127 15:24:27.351565       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.18"]
	E0127 15:24:27.351669       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 15:24:27.440218       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 15:24:27.440325       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 15:24:27.440370       1 server_linux.go:170] "Using iptables Proxier"
	I0127 15:24:27.447863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 15:24:27.448301       1 server.go:497] "Version info" version="v1.32.1"
	I0127 15:24:27.448502       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:24:27.450347       1 config.go:199] "Starting service config controller"
	I0127 15:24:27.457068       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 15:24:27.455626       1 config.go:105] "Starting endpoint slice config controller"
	I0127 15:24:27.457177       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 15:24:27.456365       1 config.go:329] "Starting node config controller"
	I0127 15:24:27.457185       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 15:24:27.557850       1 shared_informer.go:320] Caches are synced for node config
	I0127 15:24:27.557992       1 shared_informer.go:320] Caches are synced for service config
	I0127 15:24:27.558005       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812] <==
	I0127 15:24:25.308555       1 serving.go:386] Generated self-signed cert in-memory
	W0127 15:24:27.210252       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 15:24:27.210702       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 15:24:27.210773       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 15:24:27.210804       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 15:24:27.303172       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 15:24:27.303212       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:24:27.311232       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 15:24:27.311377       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 15:24:27.311411       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 15:24:27.311430       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 15:24:27.413040       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27] <==
	I0127 15:24:02.269551       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jan 27 15:24:26 pause-243834 kubelet[3354]: E0127 15:24:26.771216    3354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-243834\" not found" node="pause-243834"
	Jan 27 15:24:26 pause-243834 kubelet[3354]: E0127 15:24:26.771475    3354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-243834\" not found" node="pause-243834"
	Jan 27 15:24:26 pause-243834 kubelet[3354]: E0127 15:24:26.771678    3354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-243834\" not found" node="pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.415810    3354 kubelet_node_status.go:125] "Node was previously registered" node="pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.416145    3354 kubelet_node_status.go:79] "Successfully registered node" node="pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.416245    3354 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.418501    3354 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.424159    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.479168    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-243834\" already exists" pod="kube-system/etcd-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.479258    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.495105    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-243834\" already exists" pod="kube-system/kube-apiserver-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.495237    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.511329    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-243834\" already exists" pod="kube-system/kube-controller-manager-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.511429    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.527340    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-243834\" already exists" pod="kube-system/kube-scheduler-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.599048    3354 apiserver.go:52] "Watching apiserver"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.624248    3354 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.652590    3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd80540-1ea3-4591-8ddc-68031a7950f5-xtables-lock\") pod \"kube-proxy-68s7d\" (UID: \"6fd80540-1ea3-4591-8ddc-68031a7950f5\") " pod="kube-system/kube-proxy-68s7d"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.653057    3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fd80540-1ea3-4591-8ddc-68031a7950f5-lib-modules\") pod \"kube-proxy-68s7d\" (UID: \"6fd80540-1ea3-4591-8ddc-68031a7950f5\") " pod="kube-system/kube-proxy-68s7d"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.774723    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.784187    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-243834\" already exists" pod="kube-system/kube-apiserver-pause-243834"
	Jan 27 15:24:33 pause-243834 kubelet[3354]: E0127 15:24:33.776855    3354 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991473776014740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 15:24:33 pause-243834 kubelet[3354]: E0127 15:24:33.777412    3354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991473776014740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 15:24:43 pause-243834 kubelet[3354]: E0127 15:24:43.779775    3354 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991483779564135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 15:24:43 pause-243834 kubelet[3354]: E0127 15:24:43.779799    3354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991483779564135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-243834 -n pause-243834
helpers_test.go:261: (dbg) Run:  kubectl --context pause-243834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-243834 -n pause-243834
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-243834 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-243834 logs -n 25: (1.386983843s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-230388 sudo                  | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                  | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                  | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo cat              | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo cat              | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                  | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                  | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo                  | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo find             | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-230388 sudo crio             | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-230388                       | cilium-230388             | jenkins | v1.35.0 | 27 Jan 25 15:21 UTC | 27 Jan 25 15:21 UTC |
	| start   | -p running-upgrade-846704              | minikube                  | jenkins | v1.26.0 | 27 Jan 25 15:21 UTC | 27 Jan 25 15:23 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-845871                 | offline-crio-845871       | jenkins | v1.35.0 | 27 Jan 25 15:22 UTC | 27 Jan 25 15:22 UTC |
	| start   | -p pause-243834 --memory=2048          | pause-243834              | jenkins | v1.35.0 | 27 Jan 25 15:22 UTC | 27 Jan 25 15:23 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-861726 stop            | minikube                  | jenkins | v1.26.0 | 27 Jan 25 15:22 UTC | 27 Jan 25 15:22 UTC |
	| start   | -p stopped-upgrade-861726              | stopped-upgrade-861726    | jenkins | v1.35.0 | 27 Jan 25 15:22 UTC | 27 Jan 25 15:23 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-846704              | running-upgrade-846704    | jenkins | v1.35.0 | 27 Jan 25 15:23 UTC | 27 Jan 25 15:24 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-243834                        | pause-243834              | jenkins | v1.35.0 | 27 Jan 25 15:23 UTC | 27 Jan 25 15:24 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-861726              | stopped-upgrade-861726    | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC | 27 Jan 25 15:24 UTC |
	| start   | -p force-systemd-flag-937953           | force-systemd-flag-937953 | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC | 27 Jan 25 15:24 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-846704              | running-upgrade-846704    | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC | 27 Jan 25 15:24 UTC |
	| start   | -p force-systemd-env-766957            | force-systemd-env-766957  | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-937953 ssh cat      | force-systemd-flag-937953 | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC | 27 Jan 25 15:24 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-937953           | force-systemd-flag-937953 | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC | 27 Jan 25 15:24 UTC |
	| start   | -p cert-expiration-445777              | cert-expiration-445777    | jenkins | v1.35.0 | 27 Jan 25 15:24 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 15:24:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 15:24:50.604530 1057630 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:24:50.604645 1057630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:24:50.604650 1057630 out.go:358] Setting ErrFile to fd 2...
	I0127 15:24:50.604655 1057630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:24:50.604926 1057630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:24:50.605572 1057630 out.go:352] Setting JSON to false
	I0127 15:24:50.606662 1057630 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22038,"bootTime":1737969453,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:24:50.606766 1057630 start.go:139] virtualization: kvm guest
	I0127 15:24:50.609478 1057630 out.go:177] * [cert-expiration-445777] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:24:50.611346 1057630 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:24:50.611352 1057630 notify.go:220] Checking for updates...
	I0127 15:24:50.614248 1057630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:24:50.615591 1057630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:24:50.617014 1057630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:24:50.618407 1057630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:24:50.619673 1057630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.925733121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991490925707337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd79ec01-cf98-4ef3-84d1-75ba6f0981cb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.926525486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef9eb861-e89e-465f-8df2-184ce0dcb579 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.926582574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef9eb861-e89e-465f-8df2-184ce0dcb579 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.926848915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737991464286153603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737991464314067809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737991464299492136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737991464315154053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2,PodSandboxId:aad255f9acf6e432b983efb5dd8b6908e55e004653403f33799b218d69970ee3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737991451103395985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b,PodSandboxId:f7c2b3cdba583f8368ca35c96d1e63d67902587db9e0fdc2ddba6ef181c05f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737991445962559354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737991441110155498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737991440823968842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737991440745692442,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737991440756654896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae,PodSandboxId:b0fda6759f2999807245d8de62b1153a4d47fa3d2f86fd98b7bf13423a08d160,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737991416614825826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6,PodSandboxId:49e2b2e271e854cd2316790f1dce382d637fce5d39620f44b5d5dfb82be11bdb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737991415846406421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef9eb861-e89e-465f-8df2-184ce0dcb579 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.971009915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d050090-0925-4ea4-9bd1-7bb6de6af496 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.971106019Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d050090-0925-4ea4-9bd1-7bb6de6af496 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.972485483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=daff574b-1475-4015-8f7f-dfc99328ce23 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.972855464Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991490972835704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=daff574b-1475-4015-8f7f-dfc99328ce23 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.974019744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59f41600-9e2c-4d02-ab02-be69ef7d5d81 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.974180335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59f41600-9e2c-4d02-ab02-be69ef7d5d81 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:50 pause-243834 crio[2470]: time="2025-01-27 15:24:50.974597923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737991464286153603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737991464314067809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737991464299492136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737991464315154053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2,PodSandboxId:aad255f9acf6e432b983efb5dd8b6908e55e004653403f33799b218d69970ee3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737991451103395985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b,PodSandboxId:f7c2b3cdba583f8368ca35c96d1e63d67902587db9e0fdc2ddba6ef181c05f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737991445962559354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737991441110155498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737991440823968842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737991440745692442,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737991440756654896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae,PodSandboxId:b0fda6759f2999807245d8de62b1153a4d47fa3d2f86fd98b7bf13423a08d160,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737991416614825826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6,PodSandboxId:49e2b2e271e854cd2316790f1dce382d637fce5d39620f44b5d5dfb82be11bdb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737991415846406421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59f41600-9e2c-4d02-ab02-be69ef7d5d81 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.017233040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d076c75-4b8c-4c62-ac64-7480ccb46f0a name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.017306111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d076c75-4b8c-4c62-ac64-7480ccb46f0a name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.018300978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e573983e-baa3-4e35-8a84-c7f032fdd258 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.018655627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991491018631305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e573983e-baa3-4e35-8a84-c7f032fdd258 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.019126835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=847716c9-fb34-470a-bebe-66334629af8c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.019181312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=847716c9-fb34-470a-bebe-66334629af8c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.020276191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737991464286153603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737991464314067809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737991464299492136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737991464315154053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2,PodSandboxId:aad255f9acf6e432b983efb5dd8b6908e55e004653403f33799b218d69970ee3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737991451103395985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b,PodSandboxId:f7c2b3cdba583f8368ca35c96d1e63d67902587db9e0fdc2ddba6ef181c05f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737991445962559354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737991441110155498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737991440823968842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737991440745692442,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737991440756654896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae,PodSandboxId:b0fda6759f2999807245d8de62b1153a4d47fa3d2f86fd98b7bf13423a08d160,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737991416614825826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6,PodSandboxId:49e2b2e271e854cd2316790f1dce382d637fce5d39620f44b5d5dfb82be11bdb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737991415846406421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=847716c9-fb34-470a-bebe-66334629af8c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.068609267Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=555c41e8-bbc6-4b20-8e22-667d01930b7d name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.068684403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=555c41e8-bbc6-4b20-8e22-667d01930b7d name=/runtime.v1.RuntimeService/Version
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.072111556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a9f0d85-26f6-41c7-8303-6de7b7dedea8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.072478743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991491072456385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a9f0d85-26f6-41c7-8303-6de7b7dedea8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.073145394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da850d97-52d0-4972-94b3-ede6f1c1911b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.073219997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da850d97-52d0-4972-94b3-ede6f1c1911b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:24:51 pause-243834 crio[2470]: time="2025-01-27 15:24:51.073475624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737991464286153603,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737991464314067809,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737991464299492136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737991464315154053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2,PodSandboxId:aad255f9acf6e432b983efb5dd8b6908e55e004653403f33799b218d69970ee3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737991451103395985,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b,PodSandboxId:f7c2b3cdba583f8368ca35c96d1e63d67902587db9e0fdc2ddba6ef181c05f7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737991445962559354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8,PodSandboxId:932a3e1dd3ecdfe85f46b466d8c857dd880230dae171e0b1d78ddd2495bc2920,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737991441110155498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f7c2fc9eb742e6b85bbdfc4cd020f40,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36,PodSandboxId:4498330b815f38f4924304e03ce165bf97d46798ac49ee25d46a1ac867005d80,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737991440823968842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 455853b9e6988d7f5a48c925528f7b36,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd,PodSandboxId:f4da7a2c76882d7c20942e5ac8c32a0595f63e69105917afaa44d59b2951d240,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737991440745692442,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed52e73a7b6063c3bf1eda649f9d39a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27,PodSandboxId:7b7eed55ce7d62f9ee1783f6ecdc025bb47d405cef2e9b880ae8ced683386bfe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737991440756654896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-243834,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41dfb571697f45d5a3fb498b566cd6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae,PodSandboxId:b0fda6759f2999807245d8de62b1153a4d47fa3d2f86fd98b7bf13423a08d160,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737991416614825826,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2sw96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b56c511d-a960-4b42-ad94-6ceb46987306,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6,PodSandboxId:49e2b2e271e854cd2316790f1dce382d637fce5d39620f44b5d5dfb82be11bdb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737991415846406421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68s7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 6fd80540-1ea3-4591-8ddc-68031a7950f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da850d97-52d0-4972-94b3-ede6f1c1911b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cf5178d1059f0       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   26 seconds ago       Running             kube-apiserver            2                   f4da7a2c76882       kube-apiserver-pause-243834
	5273e489a267e       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   26 seconds ago       Running             kube-scheduler            2                   7b7eed55ce7d6       kube-scheduler-pause-243834
	b14e81d621061       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   26 seconds ago       Running             kube-controller-manager   2                   932a3e1dd3ecd       kube-controller-manager-pause-243834
	1dc7e6d4ce65d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   26 seconds ago       Running             etcd                      2                   4498330b815f3       etcd-pause-243834
	cdec3e290051c       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   40 seconds ago       Running             kube-proxy                1                   aad255f9acf6e       kube-proxy-68s7d
	7fba0ec384a4d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   45 seconds ago       Running             coredns                   1                   f7c2b3cdba583       coredns-668d6bf9bc-2sw96
	457ca2d324e1d       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   50 seconds ago       Exited              kube-controller-manager   1                   932a3e1dd3ecd       kube-controller-manager-pause-243834
	813ee4bb11a15       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   50 seconds ago       Exited              etcd                      1                   4498330b815f3       etcd-pause-243834
	8b4541fde6a62       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   50 seconds ago       Exited              kube-scheduler            1                   7b7eed55ce7d6       kube-scheduler-pause-243834
	f51148737687a       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   50 seconds ago       Exited              kube-apiserver            1                   f4da7a2c76882       kube-apiserver-pause-243834
	c94cbaf412489       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   b0fda6759f299       coredns-668d6bf9bc-2sw96
	402e01205dacc       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   About a minute ago   Exited              kube-proxy                0                   49e2b2e271e85       kube-proxy-68s7d
	
	
	==> coredns [7fba0ec384a4d394fe02da5edc0c7a74c97ba8735939fbda38561e2adac3ac0b] <==
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46923 - 20357 "HINFO IN 1331876287313085191.4781384439551221230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.05676906s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1842266786]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 15:24:06.068) (total time: 10001ms):
	Trace[1842266786]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:24:16.069)
	Trace[1842266786]: [10.001035599s] [10.001035599s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[275836624]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 15:24:06.069) (total time: 10000ms):
	Trace[275836624]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:24:16.070)
	Trace[275836624]: [10.000623065s] [10.000623065s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1371882890]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 15:24:06.069) (total time: 10000ms):
	Trace[1371882890]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:24:16.070)
	Trace[1371882890]: [10.000932682s] [10.000932682s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47766->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47766->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47796->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47796->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47782->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:47782->10.96.0.1:443: read: connection reset by peer
	
	
	==> coredns [c94cbaf412489c9f15fea5d0b062c484ad3101c587a53e54935e0ed3f8af7bae] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-243834
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-243834
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=pause-243834
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T15_23_30_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 15:23:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-243834
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 15:24:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 15:24:27 +0000   Mon, 27 Jan 2025 15:23:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 15:24:27 +0000   Mon, 27 Jan 2025 15:23:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 15:24:27 +0000   Mon, 27 Jan 2025 15:23:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 15:24:27 +0000   Mon, 27 Jan 2025 15:23:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.18
	  Hostname:    pause-243834
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b52e2c61abcc4d77b8d17f099e23e0c6
	  System UUID:                b52e2c61-abcc-4d77-b8d1-7f099e23e0c6
	  Boot ID:                    721cc1d7-4efb-43c0-b15e-d756f436fb30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-2sw96                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-pause-243834                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         81s
	  kube-system                 kube-apiserver-pause-243834             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-243834    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-68s7d                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-243834             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     81s                kubelet          Node pause-243834 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node pause-243834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node pause-243834 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                80s                kubelet          Node pause-243834 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node pause-243834 event: Registered Node pause-243834 in Controller
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node pause-243834 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node pause-243834 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)  kubelet          Node pause-243834 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21s                node-controller  Node pause-243834 event: Registered Node pause-243834 in Controller
	
	
	==> dmesg <==
	[  +0.198158] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.165980] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.326965] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +4.836442] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +0.068415] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.324878] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +1.203249] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.376162] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.092382] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.145829] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.300704] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[ +11.766184] kauditd_printk_skb: 92 callbacks suppressed
	[ +11.265319] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.082966] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.072948] systemd-fstab-generator[2307]: Ignoring "noauto" option for root device
	[  +0.198221] systemd-fstab-generator[2321]: Ignoring "noauto" option for root device
	[  +0.154819] systemd-fstab-generator[2333]: Ignoring "noauto" option for root device
	[  +0.343108] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[  +1.266524] systemd-fstab-generator[2574]: Ignoring "noauto" option for root device
	[Jan27 15:24] kauditd_printk_skb: 180 callbacks suppressed
	[  +5.286409] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.320898] systemd-fstab-generator[3347]: Ignoring "noauto" option for root device
	[  +0.082183] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.322541] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.904553] systemd-fstab-generator[3714]: Ignoring "noauto" option for root device
	
	
	==> etcd [1dc7e6d4ce65dc3af5ab9b5256f985f7b59d0dd78898031f523b9c38eae8998a] <==
	{"level":"info","ts":"2025-01-27T15:24:24.689944Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.18:2380"}
	{"level":"info","ts":"2025-01-27T15:24:25.971247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T15:24:25.971358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T15:24:25.971388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 received MsgPreVoteResp from 4b54c677330ae1f4 at term 2"}
	{"level":"info","ts":"2025-01-27T15:24:25.971411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T15:24:25.971440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 received MsgVoteResp from 4b54c677330ae1f4 at term 3"}
	{"level":"info","ts":"2025-01-27T15:24:25.971459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T15:24:25.971477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4b54c677330ae1f4 elected leader 4b54c677330ae1f4 at term 3"}
	{"level":"info","ts":"2025-01-27T15:24:25.977806Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T15:24:25.978652Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T15:24:25.977765Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"4b54c677330ae1f4","local-member-attributes":"{Name:pause-243834 ClientURLs:[https://192.168.72.18:2379]}","request-path":"/0/members/4b54c677330ae1f4/attributes","cluster-id":"2f920756fb341899","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T15:24:25.979346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T15:24:25.979627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T15:24:25.979665Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T15:24:25.979720Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.18:2379"}
	{"level":"info","ts":"2025-01-27T15:24:25.980049Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T15:24:25.980628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T15:24:36.688247Z","caller":"traceutil/trace.go:171","msg":"trace[1390294891] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"138.948287ms","start":"2025-01-27T15:24:36.549285Z","end":"2025-01-27T15:24:36.688233Z","steps":["trace[1390294891] 'process raft request'  (duration: 138.863581ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T15:24:37.239439Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.132054ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16281802003689431088 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" mod_revision:456 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" value_size:6721 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T15:24:37.240107Z","caller":"traceutil/trace.go:171","msg":"trace[1895679455] linearizableReadLoop","detail":"{readStateIndex:493; appliedIndex:492; }","duration":"193.996578ms","start":"2025-01-27T15:24:37.046098Z","end":"2025-01-27T15:24:37.240095Z","steps":["trace[1895679455] 'read index received'  (duration: 24.502µs)","trace[1895679455] 'applied index is now lower than readState.Index'  (duration: 193.971121ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T15:24:37.240228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.11737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-243834\" limit:1 ","response":"range_response_count:1 size:5836"}
	{"level":"info","ts":"2025-01-27T15:24:37.240264Z","caller":"traceutil/trace.go:171","msg":"trace[1602746611] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-243834; range_end:; response_count:1; response_revision:457; }","duration":"194.183033ms","start":"2025-01-27T15:24:37.046074Z","end":"2025-01-27T15:24:37.240257Z","steps":["trace[1602746611] 'agreement among raft nodes before linearized reading'  (duration: 194.106466ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T15:24:37.240387Z","caller":"traceutil/trace.go:171","msg":"trace[711455383] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"535.909223ms","start":"2025-01-27T15:24:36.704472Z","end":"2025-01-27T15:24:37.240381Z","steps":["trace[711455383] 'process raft request'  (duration: 273.50864ms)","trace[711455383] 'compare'  (duration: 260.768105ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T15:24:37.240462Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T15:24:36.704450Z","time spent":"535.978886ms","remote":"127.0.0.1:46890","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6783,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" mod_revision:456 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" value_size:6721 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-243834\" > >"}
	{"level":"info","ts":"2025-01-27T15:24:37.625635Z","caller":"traceutil/trace.go:171","msg":"trace[1341642787] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"122.202932ms","start":"2025-01-27T15:24:37.503413Z","end":"2025-01-27T15:24:37.625616Z","steps":["trace[1341642787] 'process raft request'  (duration: 84.83988ms)","trace[1341642787] 'compare'  (duration: 37.05565ms)"],"step_count":2}
	
	
	==> etcd [813ee4bb11a157a86c4578d8422591b7414cbb4898620e10f3232b221d2a1c36] <==
	{"level":"info","ts":"2025-01-27T15:24:01.236128Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-01-27T15:24:01.287398Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"2f920756fb341899","local-member-id":"4b54c677330ae1f4","commit-index":403}
	{"level":"info","ts":"2025-01-27T15:24:01.288481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 switched to configuration voters=()"}
	{"level":"info","ts":"2025-01-27T15:24:01.289178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 became follower at term 2"}
	{"level":"info","ts":"2025-01-27T15:24:01.289495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 4b54c677330ae1f4 [peers: [], term: 2, commit: 403, applied: 0, lastindex: 403, lastterm: 2]"}
	{"level":"warn","ts":"2025-01-27T15:24:01.297255Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-01-27T15:24:01.335539Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":389}
	{"level":"info","ts":"2025-01-27T15:24:01.348984Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-01-27T15:24:01.378414Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"4b54c677330ae1f4","timeout":"7s"}
	{"level":"info","ts":"2025-01-27T15:24:01.378729Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"4b54c677330ae1f4"}
	{"level":"info","ts":"2025-01-27T15:24:01.378776Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"4b54c677330ae1f4","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T15:24:01.379296Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T15:24:01.406525Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T15:24:01.406822Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"4b54c677330ae1f4","initial-advertise-peer-urls":["https://192.168.72.18:2380"],"listen-peer-urls":["https://192.168.72.18:2380"],"advertise-client-urls":["https://192.168.72.18:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.18:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T15:24:01.406847Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T15:24:01.407278Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T15:24:01.416526Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.72.18:2380"}
	{"level":"info","ts":"2025-01-27T15:24:01.416547Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.18:2380"}
	{"level":"info","ts":"2025-01-27T15:24:01.416559Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T15:24:01.416613Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T15:24:01.416621Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T15:24:01.417010Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b54c677330ae1f4 switched to configuration voters=(5428181666148049396)"}
	{"level":"info","ts":"2025-01-27T15:24:01.417069Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2f920756fb341899","local-member-id":"4b54c677330ae1f4","added-peer-id":"4b54c677330ae1f4","added-peer-peer-urls":["https://192.168.72.18:2380"]}
	{"level":"info","ts":"2025-01-27T15:24:01.417199Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f920756fb341899","local-member-id":"4b54c677330ae1f4","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T15:24:01.417251Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 15:24:51 up 1 min,  0 users,  load average: 0.93, 0.31, 0.11
	Linux pause-243834 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cf5178d1059f06d5864d26e7d03fe5b3a9a692aae40fef9ad1c678d05a85a012] <==
	I0127 15:24:27.257042       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 15:24:27.258048       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 15:24:27.268550       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 15:24:27.292592       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 15:24:27.303713       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 15:24:27.303775       1 policy_source.go:240] refreshing policies
	I0127 15:24:27.315259       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 15:24:27.324973       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 15:24:27.325090       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 15:24:27.325288       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 15:24:27.326792       1 aggregator.go:171] initial CRD sync complete...
	I0127 15:24:27.326830       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 15:24:27.326837       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 15:24:27.326842       1 cache.go:39] Caches are synced for autoregister controller
	I0127 15:24:27.327071       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0127 15:24:27.369079       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0127 15:24:27.656542       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 15:24:28.159129       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 15:24:28.872164       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 15:24:28.921314       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 15:24:28.964458       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 15:24:28.975396       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 15:24:30.766030       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 15:24:30.916317       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 15:24:33.430133       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [f51148737687a2de3ed760a908b038c26b2e3bea7f1a4ffc43691d252c9cc1cd] <==
	I0127 15:24:01.254403       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0127 15:24:02.115041       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:02.115197       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0127 15:24:02.117990       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 15:24:02.128958       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 15:24:02.135754       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 15:24:02.138484       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 15:24:02.138832       1 instance.go:233] Using reconciler: lease
	W0127 15:24:02.139840       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:03.116083       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:03.116133       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:03.140563       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:04.434439       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:04.842128       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:05.002374       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:06.660755       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:07.317777       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:07.607804       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:10.765599       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:11.269148       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:11.616228       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:18.092581       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:18.189589       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:24:18.501186       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0127 15:24:22.140109       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [457ca2d324e1d626170e1d26b5f6816656542a8251ac9df739edb03caadf29c8] <==
	
	
	==> kube-controller-manager [b14e81d6210610b8d4f43321e3855e4e2bf51896665160c41b7285d2fb072e1f] <==
	I0127 15:24:30.612742       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 15:24:30.612946       1 shared_informer.go:320] Caches are synced for disruption
	I0127 15:24:30.613125       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 15:24:30.613824       1 shared_informer.go:320] Caches are synced for GC
	I0127 15:24:30.613937       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 15:24:30.613992       1 shared_informer.go:320] Caches are synced for deployment
	I0127 15:24:30.616988       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 15:24:30.625435       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 15:24:30.632975       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 15:24:30.644192       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 15:24:30.646501       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 15:24:30.648760       1 shared_informer.go:320] Caches are synced for taint
	I0127 15:24:30.648852       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 15:24:30.648956       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-243834"
	I0127 15:24:30.648990       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 15:24:30.662656       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 15:24:30.662939       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 15:24:30.663031       1 shared_informer.go:320] Caches are synced for job
	I0127 15:24:30.663120       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 15:24:30.663261       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 15:24:30.669480       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 15:24:33.438183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="69.706586ms"
	I0127 15:24:33.471323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="32.996921ms"
	I0127 15:24:33.520443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.015051ms"
	I0127 15:24:33.520600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.178µs"
	
	
	==> kube-proxy [402e01205daccc10373226dbbb2f7f33af08c430590dd765f6de941bdd00cbd6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 15:23:36.949468       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 15:23:36.980449       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.18"]
	E0127 15:23:36.981088       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 15:23:37.031562       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 15:23:37.031681       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 15:23:37.031745       1 server_linux.go:170] "Using iptables Proxier"
	I0127 15:23:37.035378       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 15:23:37.036159       1 server.go:497] "Version info" version="v1.32.1"
	I0127 15:23:37.036191       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:23:37.041555       1 config.go:199] "Starting service config controller"
	I0127 15:23:37.047269       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 15:23:37.047357       1 config.go:329] "Starting node config controller"
	I0127 15:23:37.047369       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 15:23:37.052036       1 config.go:105] "Starting endpoint slice config controller"
	I0127 15:23:37.052154       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 15:23:37.152699       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 15:23:37.152697       1 shared_informer.go:320] Caches are synced for node config
	I0127 15:23:37.152766       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [cdec3e290051cd9cd63ef734ecc0a62fed937561b40b47da5402d5223fa140d2] <==
	 >
	E0127 15:24:11.299591       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 15:24:21.303022       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-243834\": net/http: TLS handshake timeout"
	E0127 15:24:22.375044       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-243834\": dial tcp 192.168.72.18:8443: connect: connection refused"
	I0127 15:24:27.351565       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.18"]
	E0127 15:24:27.351669       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 15:24:27.440218       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 15:24:27.440325       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 15:24:27.440370       1 server_linux.go:170] "Using iptables Proxier"
	I0127 15:24:27.447863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 15:24:27.448301       1 server.go:497] "Version info" version="v1.32.1"
	I0127 15:24:27.448502       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:24:27.450347       1 config.go:199] "Starting service config controller"
	I0127 15:24:27.457068       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 15:24:27.455626       1 config.go:105] "Starting endpoint slice config controller"
	I0127 15:24:27.457177       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 15:24:27.456365       1 config.go:329] "Starting node config controller"
	I0127 15:24:27.457185       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 15:24:27.557850       1 shared_informer.go:320] Caches are synced for node config
	I0127 15:24:27.557992       1 shared_informer.go:320] Caches are synced for service config
	I0127 15:24:27.558005       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5273e489a267e690f9f66f6a9d37d011bc370eb9edf6c3b5261bb22fd9d98812] <==
	I0127 15:24:25.308555       1 serving.go:386] Generated self-signed cert in-memory
	W0127 15:24:27.210252       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 15:24:27.210702       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 15:24:27.210773       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 15:24:27.210804       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 15:24:27.303172       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 15:24:27.303212       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:24:27.311232       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 15:24:27.311377       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 15:24:27.311411       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 15:24:27.311430       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 15:24:27.413040       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8b4541fde6a62bdf6bb80c3d9a522cbe540ab288daf14ab74f7cce7dadf5ea27] <==
	I0127 15:24:02.269551       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Jan 27 15:24:26 pause-243834 kubelet[3354]: E0127 15:24:26.771216    3354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-243834\" not found" node="pause-243834"
	Jan 27 15:24:26 pause-243834 kubelet[3354]: E0127 15:24:26.771475    3354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-243834\" not found" node="pause-243834"
	Jan 27 15:24:26 pause-243834 kubelet[3354]: E0127 15:24:26.771678    3354 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-243834\" not found" node="pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.415810    3354 kubelet_node_status.go:125] "Node was previously registered" node="pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.416145    3354 kubelet_node_status.go:79] "Successfully registered node" node="pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.416245    3354 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.418501    3354 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.424159    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.479168    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-243834\" already exists" pod="kube-system/etcd-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.479258    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.495105    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-243834\" already exists" pod="kube-system/kube-apiserver-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.495237    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.511329    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-243834\" already exists" pod="kube-system/kube-controller-manager-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.511429    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.527340    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-243834\" already exists" pod="kube-system/kube-scheduler-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.599048    3354 apiserver.go:52] "Watching apiserver"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.624248    3354 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.652590    3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd80540-1ea3-4591-8ddc-68031a7950f5-xtables-lock\") pod \"kube-proxy-68s7d\" (UID: \"6fd80540-1ea3-4591-8ddc-68031a7950f5\") " pod="kube-system/kube-proxy-68s7d"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.653057    3354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fd80540-1ea3-4591-8ddc-68031a7950f5-lib-modules\") pod \"kube-proxy-68s7d\" (UID: \"6fd80540-1ea3-4591-8ddc-68031a7950f5\") " pod="kube-system/kube-proxy-68s7d"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: I0127 15:24:27.774723    3354 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-243834"
	Jan 27 15:24:27 pause-243834 kubelet[3354]: E0127 15:24:27.784187    3354 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-243834\" already exists" pod="kube-system/kube-apiserver-pause-243834"
	Jan 27 15:24:33 pause-243834 kubelet[3354]: E0127 15:24:33.776855    3354 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991473776014740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 15:24:33 pause-243834 kubelet[3354]: E0127 15:24:33.777412    3354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991473776014740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 15:24:43 pause-243834 kubelet[3354]: E0127 15:24:43.779775    3354 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991483779564135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 15:24:43 pause-243834 kubelet[3354]: E0127 15:24:43.779799    3354 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737991483779564135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-243834 -n pause-243834
helpers_test.go:261: (dbg) Run:  kubectl --context pause-243834 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (61.79s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (290.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-405706 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-405706 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m49.78765698s)

                                                
                                                
-- stdout --
	* [old-k8s-version-405706] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-405706" primary control-plane node in "old-k8s-version-405706" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:31:11.464681 1068488 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:31:11.464969 1068488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:31:11.464986 1068488 out.go:358] Setting ErrFile to fd 2...
	I0127 15:31:11.464992 1068488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:31:11.465307 1068488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:31:11.466219 1068488 out.go:352] Setting JSON to false
	I0127 15:31:11.468319 1068488 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22418,"bootTime":1737969453,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:31:11.468464 1068488 start.go:139] virtualization: kvm guest
	I0127 15:31:11.471052 1068488 out.go:177] * [old-k8s-version-405706] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:31:11.472809 1068488 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:31:11.472986 1068488 notify.go:220] Checking for updates...
	I0127 15:31:11.475866 1068488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:31:11.477488 1068488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:31:11.478926 1068488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:31:11.480321 1068488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:31:11.482025 1068488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:31:11.484164 1068488 config.go:182] Loaded profile config "bridge-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:31:11.484396 1068488 config.go:182] Loaded profile config "enable-default-cni-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:31:11.484550 1068488 config.go:182] Loaded profile config "flannel-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:31:11.484775 1068488 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:31:11.531141 1068488 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 15:31:11.532545 1068488 start.go:297] selected driver: kvm2
	I0127 15:31:11.532568 1068488 start.go:901] validating driver "kvm2" against <nil>
	I0127 15:31:11.532585 1068488 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:31:11.533688 1068488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:31:11.533806 1068488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:31:11.551680 1068488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:31:11.551777 1068488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 15:31:11.552156 1068488 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:31:11.552204 1068488 cni.go:84] Creating CNI manager for ""
	I0127 15:31:11.552282 1068488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:31:11.552297 1068488 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 15:31:11.552368 1068488 start.go:340] cluster config:
	{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:31:11.552512 1068488 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:31:11.555232 1068488 out.go:177] * Starting "old-k8s-version-405706" primary control-plane node in "old-k8s-version-405706" cluster
	I0127 15:31:11.556799 1068488 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:31:11.556850 1068488 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 15:31:11.556865 1068488 cache.go:56] Caching tarball of preloaded images
	I0127 15:31:11.556974 1068488 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:31:11.556988 1068488 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 15:31:11.557117 1068488 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:31:11.557146 1068488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json: {Name:mk02af48a296bed7dcfaaac24e9a5197e65ef07e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:31:11.557331 1068488 start.go:360] acquireMachinesLock for old-k8s-version-405706: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:31:28.995441 1068488 start.go:364] duration metric: took 17.438050275s to acquireMachinesLock for "old-k8s-version-405706"
	I0127 15:31:28.995501 1068488 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:31:28.995648 1068488 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 15:31:28.997678 1068488 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 15:31:28.997891 1068488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:31:28.997944 1068488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:31:29.016014 1068488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0127 15:31:29.016882 1068488 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:31:29.017584 1068488 main.go:141] libmachine: Using API Version  1
	I0127 15:31:29.017605 1068488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:31:29.018025 1068488 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:31:29.018248 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:31:29.018394 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:31:29.018526 1068488 start.go:159] libmachine.API.Create for "old-k8s-version-405706" (driver="kvm2")
	I0127 15:31:29.018555 1068488 client.go:168] LocalClient.Create starting
	I0127 15:31:29.018591 1068488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem
	I0127 15:31:29.018628 1068488 main.go:141] libmachine: Decoding PEM data...
	I0127 15:31:29.018645 1068488 main.go:141] libmachine: Parsing certificate...
	I0127 15:31:29.018711 1068488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem
	I0127 15:31:29.018739 1068488 main.go:141] libmachine: Decoding PEM data...
	I0127 15:31:29.018753 1068488 main.go:141] libmachine: Parsing certificate...
	I0127 15:31:29.018774 1068488 main.go:141] libmachine: Running pre-create checks...
	I0127 15:31:29.018847 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .PreCreateCheck
	I0127 15:31:29.019548 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetConfigRaw
	I0127 15:31:29.020178 1068488 main.go:141] libmachine: Creating machine...
	I0127 15:31:29.020197 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .Create
	I0127 15:31:29.020400 1068488 main.go:141] libmachine: (old-k8s-version-405706) creating KVM machine...
	I0127 15:31:29.020479 1068488 main.go:141] libmachine: (old-k8s-version-405706) creating network...
	I0127 15:31:29.022165 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found existing default KVM network
	I0127 15:31:29.024447 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:29.024271 1068991 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:67:a9} reservation:<nil>}
	I0127 15:31:29.025589 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:29.025487 1068991 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e2:fe:ba} reservation:<nil>}
	I0127 15:31:29.026955 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:29.026875 1068991 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:97:6f:95} reservation:<nil>}
	I0127 15:31:29.028512 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:29.028413 1068991 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003f0b30}
	I0127 15:31:29.028706 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | created network xml: 
	I0127 15:31:29.028721 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | <network>
	I0127 15:31:29.028733 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |   <name>mk-old-k8s-version-405706</name>
	I0127 15:31:29.028742 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |   <dns enable='no'/>
	I0127 15:31:29.028752 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |   
	I0127 15:31:29.028773 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 15:31:29.028783 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |     <dhcp>
	I0127 15:31:29.028791 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 15:31:29.028800 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |     </dhcp>
	I0127 15:31:29.028808 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |   </ip>
	I0127 15:31:29.028816 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG |   
	I0127 15:31:29.028824 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | </network>
	I0127 15:31:29.028834 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | 
	I0127 15:31:29.035550 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | trying to create private KVM network mk-old-k8s-version-405706 192.168.72.0/24...
	I0127 15:31:29.128159 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | private KVM network mk-old-k8s-version-405706 192.168.72.0/24 created
	I0127 15:31:29.128193 1068488 main.go:141] libmachine: (old-k8s-version-405706) setting up store path in /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706 ...
	I0127 15:31:29.128207 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:29.128119 1068991 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:31:29.128246 1068488 main.go:141] libmachine: (old-k8s-version-405706) building disk image from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 15:31:29.128263 1068488 main.go:141] libmachine: (old-k8s-version-405706) Downloading /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 15:31:29.452194 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:29.452089 1068991 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa...
	I0127 15:31:29.760996 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:29.760841 1068991 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/old-k8s-version-405706.rawdisk...
	I0127 15:31:29.761146 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | Writing magic tar header
	I0127 15:31:29.761339 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | Writing SSH key tar header
	I0127 15:31:29.761478 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:29.761389 1068991 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706 ...
	I0127 15:31:29.761712 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706
	I0127 15:31:29.761734 1068488 main.go:141] libmachine: (old-k8s-version-405706) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706 (perms=drwx------)
	I0127 15:31:29.761745 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines
	I0127 15:31:29.761769 1068488 main.go:141] libmachine: (old-k8s-version-405706) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines (perms=drwxr-xr-x)
	I0127 15:31:29.761800 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:31:29.761883 1068488 main.go:141] libmachine: (old-k8s-version-405706) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube (perms=drwxr-xr-x)
	I0127 15:31:29.761952 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652
	I0127 15:31:29.761970 1068488 main.go:141] libmachine: (old-k8s-version-405706) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652 (perms=drwxrwxr-x)
	I0127 15:31:29.762014 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 15:31:29.762071 1068488 main.go:141] libmachine: (old-k8s-version-405706) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 15:31:29.762141 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | checking permissions on dir: /home/jenkins
	I0127 15:31:29.762191 1068488 main.go:141] libmachine: (old-k8s-version-405706) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 15:31:29.762246 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | checking permissions on dir: /home
	I0127 15:31:29.762294 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | skipping /home - not owner
	I0127 15:31:29.762341 1068488 main.go:141] libmachine: (old-k8s-version-405706) creating domain...
	I0127 15:31:29.764498 1068488 main.go:141] libmachine: (old-k8s-version-405706) define libvirt domain using xml: 
	I0127 15:31:29.764517 1068488 main.go:141] libmachine: (old-k8s-version-405706) <domain type='kvm'>
	I0127 15:31:29.764538 1068488 main.go:141] libmachine: (old-k8s-version-405706)   <name>old-k8s-version-405706</name>
	I0127 15:31:29.764549 1068488 main.go:141] libmachine: (old-k8s-version-405706)   <memory unit='MiB'>2200</memory>
	I0127 15:31:29.764560 1068488 main.go:141] libmachine: (old-k8s-version-405706)   <vcpu>2</vcpu>
	I0127 15:31:29.764572 1068488 main.go:141] libmachine: (old-k8s-version-405706)   <features>
	I0127 15:31:29.764585 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <acpi/>
	I0127 15:31:29.764592 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <apic/>
	I0127 15:31:29.764604 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <pae/>
	I0127 15:31:29.764610 1068488 main.go:141] libmachine: (old-k8s-version-405706)     
	I0127 15:31:29.764618 1068488 main.go:141] libmachine: (old-k8s-version-405706)   </features>
	I0127 15:31:29.764630 1068488 main.go:141] libmachine: (old-k8s-version-405706)   <cpu mode='host-passthrough'>
	I0127 15:31:29.764638 1068488 main.go:141] libmachine: (old-k8s-version-405706)   
	I0127 15:31:29.764647 1068488 main.go:141] libmachine: (old-k8s-version-405706)   </cpu>
	I0127 15:31:29.764655 1068488 main.go:141] libmachine: (old-k8s-version-405706)   <os>
	I0127 15:31:29.764665 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <type>hvm</type>
	I0127 15:31:29.764673 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <boot dev='cdrom'/>
	I0127 15:31:29.764683 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <boot dev='hd'/>
	I0127 15:31:29.764689 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <bootmenu enable='no'/>
	I0127 15:31:29.764694 1068488 main.go:141] libmachine: (old-k8s-version-405706)   </os>
	I0127 15:31:29.764701 1068488 main.go:141] libmachine: (old-k8s-version-405706)   <devices>
	I0127 15:31:29.764711 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <disk type='file' device='cdrom'>
	I0127 15:31:29.764726 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/boot2docker.iso'/>
	I0127 15:31:29.764736 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <target dev='hdc' bus='scsi'/>
	I0127 15:31:29.764746 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <readonly/>
	I0127 15:31:29.764756 1068488 main.go:141] libmachine: (old-k8s-version-405706)     </disk>
	I0127 15:31:29.764766 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <disk type='file' device='disk'>
	I0127 15:31:29.764778 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 15:31:29.764794 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/old-k8s-version-405706.rawdisk'/>
	I0127 15:31:29.764805 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <target dev='hda' bus='virtio'/>
	I0127 15:31:29.764816 1068488 main.go:141] libmachine: (old-k8s-version-405706)     </disk>
	I0127 15:31:29.764823 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <interface type='network'>
	I0127 15:31:29.764833 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <source network='mk-old-k8s-version-405706'/>
	I0127 15:31:29.764844 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <model type='virtio'/>
	I0127 15:31:29.764852 1068488 main.go:141] libmachine: (old-k8s-version-405706)     </interface>
	I0127 15:31:29.764866 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <interface type='network'>
	I0127 15:31:29.764874 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <source network='default'/>
	I0127 15:31:29.764878 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <model type='virtio'/>
	I0127 15:31:29.764886 1068488 main.go:141] libmachine: (old-k8s-version-405706)     </interface>
	I0127 15:31:29.764890 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <serial type='pty'>
	I0127 15:31:29.764897 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <target port='0'/>
	I0127 15:31:29.764901 1068488 main.go:141] libmachine: (old-k8s-version-405706)     </serial>
	I0127 15:31:29.764908 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <console type='pty'>
	I0127 15:31:29.764915 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <target type='serial' port='0'/>
	I0127 15:31:29.764920 1068488 main.go:141] libmachine: (old-k8s-version-405706)     </console>
	I0127 15:31:29.764929 1068488 main.go:141] libmachine: (old-k8s-version-405706)     <rng model='virtio'>
	I0127 15:31:29.764935 1068488 main.go:141] libmachine: (old-k8s-version-405706)       <backend model='random'>/dev/random</backend>
	I0127 15:31:29.764938 1068488 main.go:141] libmachine: (old-k8s-version-405706)     </rng>
	I0127 15:31:29.764943 1068488 main.go:141] libmachine: (old-k8s-version-405706)     
	I0127 15:31:29.764949 1068488 main.go:141] libmachine: (old-k8s-version-405706)     
	I0127 15:31:29.764954 1068488 main.go:141] libmachine: (old-k8s-version-405706)   </devices>
	I0127 15:31:29.764960 1068488 main.go:141] libmachine: (old-k8s-version-405706) </domain>
	I0127 15:31:29.764967 1068488 main.go:141] libmachine: (old-k8s-version-405706) 
	I0127 15:31:29.770050 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:1c:ed:b0 in network default
	I0127 15:31:29.770839 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:29.770861 1068488 main.go:141] libmachine: (old-k8s-version-405706) starting domain...
	I0127 15:31:29.770895 1068488 main.go:141] libmachine: (old-k8s-version-405706) ensuring networks are active...
	I0127 15:31:29.771798 1068488 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network default is active
	I0127 15:31:29.772220 1068488 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network mk-old-k8s-version-405706 is active
	I0127 15:31:29.772856 1068488 main.go:141] libmachine: (old-k8s-version-405706) getting domain XML...
	I0127 15:31:29.773868 1068488 main.go:141] libmachine: (old-k8s-version-405706) creating domain...
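The domain XML printed above is handed to libvirt, which defines the guest and then boots it ("creating domain..."). A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings and a pre-built XML file named domain.xml (illustrative only, not minikube's actual kvm2 driver code):

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same URI the profile uses (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold the <domain>...</domain> document shown in the log.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it ("creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain started; now waiting for a DHCP lease")
}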
	I0127 15:31:31.439322 1068488 main.go:141] libmachine: (old-k8s-version-405706) waiting for IP...
	I0127 15:31:31.440290 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:31.440765 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:31.440816 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:31.440756 1068991 retry.go:31] will retry after 303.434567ms: waiting for domain to come up
	I0127 15:31:31.748985 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:31.749957 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:31.749986 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:31.749942 1068991 retry.go:31] will retry after 369.936346ms: waiting for domain to come up
	I0127 15:31:32.121823 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:32.122291 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:32.122370 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:32.122302 1068991 retry.go:31] will retry after 352.994659ms: waiting for domain to come up
	I0127 15:31:32.478464 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:32.479743 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:32.479800 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:32.479716 1068991 retry.go:31] will retry after 380.044501ms: waiting for domain to come up
	I0127 15:31:32.861290 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:32.861930 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:32.861963 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:32.861888 1068991 retry.go:31] will retry after 745.725276ms: waiting for domain to come up
	I0127 15:31:33.609115 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:33.609487 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:33.609515 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:33.609487 1068991 retry.go:31] will retry after 744.641364ms: waiting for domain to come up
	I0127 15:31:34.356328 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:34.356919 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:34.356952 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:34.356869 1068991 retry.go:31] will retry after 801.411861ms: waiting for domain to come up
	I0127 15:31:35.159519 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:35.160194 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:35.160235 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:35.160151 1068991 retry.go:31] will retry after 1.416652771s: waiting for domain to come up
	I0127 15:31:36.578340 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:36.578823 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:36.578894 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:36.578795 1068991 retry.go:31] will retry after 1.215712427s: waiting for domain to come up
	I0127 15:31:37.796230 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:37.796755 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:37.796783 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:37.796723 1068991 retry.go:31] will retry after 2.104512717s: waiting for domain to come up
	I0127 15:31:39.902740 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:39.903358 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:39.903392 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:39.903309 1068991 retry.go:31] will retry after 2.13337839s: waiting for domain to come up
	I0127 15:31:42.038925 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:42.039517 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:42.039562 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:42.039521 1068991 retry.go:31] will retry after 3.075234373s: waiting for domain to come up
	I0127 15:31:45.116236 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:45.116798 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:45.116890 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:45.116795 1068991 retry.go:31] will retry after 3.128667547s: waiting for domain to come up
	I0127 15:31:48.249092 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:48.249653 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:31:48.249682 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:31:48.249620 1068991 retry.go:31] will retry after 4.415522736s: waiting for domain to come up
	I0127 15:31:52.667111 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:52.667688 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has current primary IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:52.667710 1068488 main.go:141] libmachine: (old-k8s-version-405706) found domain IP: 192.168.72.49
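The "waiting for IP" loop above is a poll of the libvirt network's DHCP leases until the guest's MAC address appears. A hedged sketch of that lookup with a simple capped backoff (again via the libvirt.org/go/libvirt bindings; the intervals are illustrative, not minikube's retry.go policy):

package main

import (
	"fmt"
	"log"
	"time"

	"libvirt.org/go/libvirt"
)

// waitForLease polls the named libvirt network until a DHCP lease for mac appears.
func waitForLease(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
	nw, err := conn.LookupNetworkByName(network)
	if err != nil {
		return "", err
	}
	defer nw.Free()

	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := nw.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.Mac == mac {
				return l.IPaddr, nil // e.g. 192.168.72.49 in the log above
			}
		}
		time.Sleep(delay)
		if delay < 4*time.Second { // crude growing backoff, capped
			delay *= 2
		}
	}
	return "", fmt.Errorf("no lease for %s in network %s after %s", mac, network, timeout)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ip, err := waitForLease(conn, "mk-old-k8s-version-405706", "52:54:00:c3:d6:50", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found domain IP:", ip)
}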
	I0127 15:31:52.667720 1068488 main.go:141] libmachine: (old-k8s-version-405706) reserving static IP address...
	I0127 15:31:52.668060 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"} in network mk-old-k8s-version-405706
	I0127 15:31:52.769588 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | Getting to WaitForSSH function...
	I0127 15:31:52.769619 1068488 main.go:141] libmachine: (old-k8s-version-405706) reserved static IP address 192.168.72.49 for domain old-k8s-version-405706
	I0127 15:31:52.769633 1068488 main.go:141] libmachine: (old-k8s-version-405706) waiting for SSH...
	I0127 15:31:52.773673 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:52.774187 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:52.774225 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:52.774407 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH client type: external
	I0127 15:31:52.774449 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa (-rw-------)
	I0127 15:31:52.774485 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:31:52.774497 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | About to run SSH command:
	I0127 15:31:52.774520 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | exit 0
	I0127 15:31:52.906804 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | SSH cmd err, output: <nil>: 
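WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until it succeeds, which is the "SSH cmd err, output: <nil>" line above. A rough equivalent of that probe (retry interval and timeout are placeholders):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh user@host exit 0` succeeds with the given key.
func sshReady(host, user, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // matches "SSH cmd err, output: <nil>" in the log
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh to %s not ready after %s", host, timeout)
		}
		time.Sleep(3 * time.Second)
	}
}

func main() {
	err := sshReady("192.168.72.49", "docker",
		"/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa",
		2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}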
	I0127 15:31:52.907171 1068488 main.go:141] libmachine: (old-k8s-version-405706) KVM machine creation complete
	I0127 15:31:52.907509 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetConfigRaw
	I0127 15:31:52.908167 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:31:52.908392 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:31:52.908639 1068488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 15:31:52.908659 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetState
	I0127 15:31:52.910100 1068488 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 15:31:52.910116 1068488 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 15:31:52.910121 1068488 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 15:31:52.910127 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:52.912974 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:52.913403 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:52.913433 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:52.913562 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:52.913776 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:52.913968 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:52.914111 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:52.914310 1068488 main.go:141] libmachine: Using SSH client type: native
	I0127 15:31:52.914578 1068488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:31:52.914595 1068488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 15:31:53.027912 1068488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:31:53.027939 1068488 main.go:141] libmachine: Detecting the provisioner...
	I0127 15:31:53.027951 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:53.031687 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.032199 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:53.032246 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.032508 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:53.032817 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:53.033053 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:53.033253 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:53.033515 1068488 main.go:141] libmachine: Using SSH client type: native
	I0127 15:31:53.033756 1068488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:31:53.033776 1068488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 15:31:53.155017 1068488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 15:31:53.155141 1068488 main.go:141] libmachine: found compatible host: buildroot
	I0127 15:31:53.155160 1068488 main.go:141] libmachine: Provisioning with buildroot...
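Once the external probe passes, later commands use a native Go SSH client, and `cat /etc/os-release` is parsed to pick the provisioner (Buildroot here). A condensed sketch with golang.org/x/crypto/ssh; the key path is a placeholder and host-key verification is deliberately skipped, mirroring the StrictHostKeyChecking=no options above:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // machine key; path illustrative
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
	}
	client, err := ssh.Dial("tcp", "192.168.72.49:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	// The log shows ID=buildroot, which selects the buildroot provisioner.
	if strings.Contains(string(out), "ID=buildroot") {
		fmt.Println("found compatible host: buildroot")
	}
}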
	I0127 15:31:53.155172 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:31:53.155514 1068488 buildroot.go:166] provisioning hostname "old-k8s-version-405706"
	I0127 15:31:53.155563 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:31:53.155817 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:53.158626 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.158981 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:53.159010 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.159136 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:53.159342 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:53.159509 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:53.159673 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:53.159850 1068488 main.go:141] libmachine: Using SSH client type: native
	I0127 15:31:53.160059 1068488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:31:53.160077 1068488 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-405706 && echo "old-k8s-version-405706" | sudo tee /etc/hostname
	I0127 15:31:53.282800 1068488 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-405706
	
	I0127 15:31:53.282836 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:53.286120 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.286522 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:53.286561 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.286927 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:53.287164 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:53.287378 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:53.287560 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:53.287749 1068488 main.go:141] libmachine: Using SSH client type: native
	I0127 15:31:53.288008 1068488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:31:53.288035 1068488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-405706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-405706/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-405706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:31:53.404374 1068488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:31:53.404435 1068488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:31:53.404497 1068488 buildroot.go:174] setting up certificates
	I0127 15:31:53.404514 1068488 provision.go:84] configureAuth start
	I0127 15:31:53.404529 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:31:53.404846 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:31:53.408112 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.408549 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:53.408579 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.408818 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:53.411327 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.411677 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:53.411708 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:53.411854 1068488 provision.go:143] copyHostCerts
	I0127 15:31:53.411922 1068488 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:31:53.411944 1068488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:31:53.412003 1068488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:31:53.412091 1068488 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:31:53.412100 1068488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:31:53.412118 1068488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:31:53.412173 1068488 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:31:53.412181 1068488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:31:53.412203 1068488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:31:53.412260 1068488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-405706 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-405706]
	I0127 15:31:54.143917 1068488 provision.go:177] copyRemoteCerts
	I0127 15:31:54.144005 1068488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:31:54.144050 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:54.147354 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.147771 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.147799 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.148030 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:54.148261 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:54.148462 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:54.148628 1068488 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:31:54.239315 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:31:54.266295 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 15:31:54.296904 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 15:31:54.324236 1068488 provision.go:87] duration metric: took 919.702673ms to configureAuth
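configureAuth generates a CA-signed server certificate whose SANs match the list logged at provision.go:117 before copying it to /etc/docker on the guest. A self-contained sketch with crypto/x509; a throwaway CA stands in for .minikube/certs/ca.pem, and key sizes and lifetimes are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs from the log:
	// [127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-405706]
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-405706"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-405706"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.49")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	// Write the PEM that would become server.pem on the guest.
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}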
	I0127 15:31:54.324274 1068488 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:31:54.324478 1068488 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:31:54.324574 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:54.327544 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.327930 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.327966 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.328118 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:54.328287 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:54.328487 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:54.328729 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:54.328942 1068488 main.go:141] libmachine: Using SSH client type: native
	I0127 15:31:54.329300 1068488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:31:54.329329 1068488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:31:54.595824 1068488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:31:54.595857 1068488 main.go:141] libmachine: Checking connection to Docker...
	I0127 15:31:54.595870 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetURL
	I0127 15:31:54.597472 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | using libvirt version 6000000
	I0127 15:31:54.600176 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.600579 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.600612 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.600800 1068488 main.go:141] libmachine: Docker is up and running!
	I0127 15:31:54.600814 1068488 main.go:141] libmachine: Reticulating splines...
	I0127 15:31:54.600823 1068488 client.go:171] duration metric: took 25.582259068s to LocalClient.Create
	I0127 15:31:54.600848 1068488 start.go:167] duration metric: took 25.582323385s to libmachine.API.Create "old-k8s-version-405706"
	I0127 15:31:54.600863 1068488 start.go:293] postStartSetup for "old-k8s-version-405706" (driver="kvm2")
	I0127 15:31:54.600878 1068488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:31:54.600902 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:31:54.601147 1068488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:31:54.601176 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:54.603534 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.603875 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.603901 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.604060 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:54.604284 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:54.604450 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:54.604623 1068488 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:31:54.696013 1068488 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:31:54.702224 1068488 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:31:54.702259 1068488 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:31:54.702351 1068488 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:31:54.702472 1068488 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:31:54.702600 1068488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:31:54.715152 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:31:54.740446 1068488 start.go:296] duration metric: took 139.562096ms for postStartSetup
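The filesync pass above scans .minikube/files and copies each local asset to the same path on the guest (here 10128162.pem into /etc/ssl/certs). A small sketch of that path mapping; the root directory is taken from the log and the copy itself is omitted:

package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

func main() {
	// Root of local assets, as scanned by filesync.go in the log above.
	root := "/home/jenkins/minikube-integration/20321-1005652/.minikube/files"

	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// Everything under .minikube/files/<target> lands at /<target> on the guest,
		// e.g. .../files/etc/ssl/certs/10128162.pem -> /etc/ssl/certs/10128162.pem.
		target := strings.TrimPrefix(path, root)
		fmt.Printf("local asset: %s -> %s\n", path, target)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}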
	I0127 15:31:54.740519 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetConfigRaw
	I0127 15:31:54.741268 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:31:54.744176 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.744561 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.744594 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.744876 1068488 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:31:54.745107 1068488 start.go:128] duration metric: took 25.749444412s to createHost
	I0127 15:31:54.745136 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:54.747487 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.747825 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.747854 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.747975 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:54.748182 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:54.748308 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:54.748456 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:54.748602 1068488 main.go:141] libmachine: Using SSH client type: native
	I0127 15:31:54.748780 1068488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:31:54.748792 1068488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:31:54.862602 1068488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737991914.809674967
	
	I0127 15:31:54.862631 1068488 fix.go:216] guest clock: 1737991914.809674967
	I0127 15:31:54.862641 1068488 fix.go:229] Guest: 2025-01-27 15:31:54.809674967 +0000 UTC Remote: 2025-01-27 15:31:54.745122648 +0000 UTC m=+43.344900664 (delta=64.552319ms)
	I0127 15:31:54.862694 1068488 fix.go:200] guest clock delta is within tolerance: 64.552319ms
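The guest clock check runs `date +%s.%N` over SSH and compares it with the host clock, only resyncing when the delta exceeds a tolerance. A small sketch of that comparison; the sample output is copied from the log, while the tolerance constant is illustrative rather than minikube's exact value:

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` from the guest and
// returns how far the guest clock is from the supplied local time.
func guestClockDelta(output string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(output), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return local.Sub(guest), nil
}

func main() {
	// Output copied from the log above; a real caller would read it over SSH.
	const out = "1737991914.809674967\n"
	delta, err := guestClockDelta(out, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}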
	I0127 15:31:54.862705 1068488 start.go:83] releasing machines lock for "old-k8s-version-405706", held for 25.86723713s
	I0127 15:31:54.862748 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:31:54.863071 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:31:54.866546 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.866987 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.867008 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.867333 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:31:54.867885 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:31:54.868096 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:31:54.868207 1068488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:31:54.868279 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:54.868621 1068488 ssh_runner.go:195] Run: cat /version.json
	I0127 15:31:54.868683 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:31:54.880243 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.880450 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.880655 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.880700 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.880811 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:54.880846 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:54.880921 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:54.881095 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:31:54.881172 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:54.881320 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:54.881322 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:31:54.881523 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:31:54.881511 1068488 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:31:54.881718 1068488 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:31:54.997060 1068488 ssh_runner.go:195] Run: systemctl --version
	I0127 15:31:55.005507 1068488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:31:55.176817 1068488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:31:55.185365 1068488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:31:55.185443 1068488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:31:55.204999 1068488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:31:55.205054 1068488 start.go:495] detecting cgroup driver to use...
	I0127 15:31:55.205147 1068488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:31:55.226523 1068488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:31:55.246165 1068488 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:31:55.246237 1068488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:31:55.265498 1068488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:31:55.284594 1068488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:31:55.448004 1068488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:31:55.616996 1068488 docker.go:233] disabling docker service ...
	I0127 15:31:55.617113 1068488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:31:55.636529 1068488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:31:55.660414 1068488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:31:55.842747 1068488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:31:55.995073 1068488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:31:56.012871 1068488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:31:56.037165 1068488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 15:31:56.037245 1068488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:31:56.048668 1068488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:31:56.048751 1068488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:31:56.060337 1068488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:31:56.072085 1068488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:31:56.085703 1068488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:31:56.100998 1068488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:31:56.116142 1068488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:31:56.116220 1068488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:31:56.154398 1068488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 15:31:56.169666 1068488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:31:56.312173 1068488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:31:56.757160 1068488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:31:56.757253 1068488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
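After restarting CRI-O, minikube waits up to 60s for the runtime socket before trusting crictl. A tiny sketch of that wait; it stats the path locally for brevity, whereas minikube runs the stat over its SSH runner:

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket stats path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s did not appear within %s", path, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("crio socket is ready")
}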
	I0127 15:31:56.763286 1068488 start.go:563] Will wait 60s for crictl version
	I0127 15:31:56.763360 1068488 ssh_runner.go:195] Run: which crictl
	I0127 15:31:56.768568 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:31:56.818562 1068488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:31:56.818688 1068488 ssh_runner.go:195] Run: crio --version
	I0127 15:31:56.854118 1068488 ssh_runner.go:195] Run: crio --version
	I0127 15:31:56.890314 1068488 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 15:31:56.891572 1068488 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:31:56.895244 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:56.895766 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:31:56.895798 1068488 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:31:56.896218 1068488 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 15:31:56.900921 1068488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:31:56.916354 1068488 kubeadm.go:883] updating cluster {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:31:56.916520 1068488 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:31:56.916594 1068488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:31:56.957570 1068488 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
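The preload check is a `crictl images --output json` call whose output is scanned for the expected kube-apiserver tag; when the tag is missing, the preloaded tarball is copied in instead. A hedged sketch of that scan (the JSON field names follow crictl's usual output format and should be treated as an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already has the given repo tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		fmt.Println("couldn't find preloaded image; assuming images are not preloaded")
	}
}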
	I0127 15:31:56.957660 1068488 ssh_runner.go:195] Run: which lz4
	I0127 15:31:56.962297 1068488 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:31:56.966986 1068488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:31:56.967024 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 15:31:58.757895 1068488 crio.go:462] duration metric: took 1.795630336s to copy over tarball
	I0127 15:31:58.758006 1068488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 15:32:01.484600 1068488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.726551001s)
	I0127 15:32:01.484650 1068488 crio.go:469] duration metric: took 2.72671774s to extract the tarball
	I0127 15:32:01.484662 1068488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 15:32:01.528722 1068488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:32:01.585532 1068488 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:32:01.585564 1068488 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 15:32:01.585637 1068488 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:32:01.585673 1068488 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:32:01.585692 1068488 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:32:01.585708 1068488 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 15:32:01.585715 1068488 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 15:32:01.585638 1068488 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:32:01.585729 1068488 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:32:01.585737 1068488 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:32:01.587215 1068488 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 15:32:01.587219 1068488 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:32:01.587232 1068488 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:32:01.587237 1068488 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:32:01.587252 1068488 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:32:01.587232 1068488 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:32:01.587277 1068488 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:32:01.587376 1068488 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 15:32:01.760578 1068488 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:32:01.777364 1068488 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 15:32:01.785880 1068488 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 15:32:01.797600 1068488 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:32:01.810711 1068488 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 15:32:01.811465 1068488 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:32:01.824292 1068488 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:32:01.829352 1068488 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 15:32:01.829432 1068488 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:32:01.829487 1068488 ssh_runner.go:195] Run: which crictl
	I0127 15:32:01.923910 1068488 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 15:32:01.923971 1068488 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 15:32:01.924025 1068488 ssh_runner.go:195] Run: which crictl
	I0127 15:32:01.939576 1068488 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 15:32:01.939633 1068488 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 15:32:01.939669 1068488 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:32:01.939723 1068488 ssh_runner.go:195] Run: which crictl
	I0127 15:32:01.939639 1068488 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 15:32:01.939792 1068488 ssh_runner.go:195] Run: which crictl
	I0127 15:32:01.984164 1068488 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 15:32:01.984222 1068488 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:32:01.984275 1068488 ssh_runner.go:195] Run: which crictl
	I0127 15:32:01.985398 1068488 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 15:32:01.985446 1068488 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:32:01.985452 1068488 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 15:32:01.985495 1068488 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:32:01.985500 1068488 ssh_runner.go:195] Run: which crictl
	I0127 15:32:01.985537 1068488 ssh_runner.go:195] Run: which crictl
	I0127 15:32:01.985543 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:32:01.985592 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:32:01.985603 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:32:01.985654 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:32:01.989030 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:32:02.064404 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:32:02.110244 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:32:02.110311 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:32:02.110383 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:32:02.115255 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:32:02.115376 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:32:02.115394 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:32:02.178528 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:32:02.189199 1068488 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:32:02.304204 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:32:02.304270 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:32:02.304367 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:32:02.329433 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:32:02.329506 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:32:02.352263 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:32:02.352276 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:32:02.546782 1068488 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 15:32:02.546862 1068488 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:32:02.546869 1068488 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 15:32:02.546992 1068488 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 15:32:02.547029 1068488 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 15:32:02.547103 1068488 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 15:32:02.547134 1068488 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 15:32:02.583993 1068488 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 15:32:02.584064 1068488 cache_images.go:92] duration metric: took 998.482949ms to LoadCachedImages
	W0127 15:32:02.584144 1068488 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0127 15:32:02.584163 1068488 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0127 15:32:02.584285 1068488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-405706 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:32:02.584377 1068488 ssh_runner.go:195] Run: crio config
	I0127 15:32:02.633334 1068488 cni.go:84] Creating CNI manager for ""
	I0127 15:32:02.633367 1068488 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:32:02.633382 1068488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:32:02.633411 1068488 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-405706 NodeName:old-k8s-version-405706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 15:32:02.633617 1068488 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-405706"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:32:02.633690 1068488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 15:32:02.645546 1068488 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:32:02.645623 1068488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:32:02.655951 1068488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 15:32:02.674720 1068488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:32:02.696536 1068488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 15:32:02.715482 1068488 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0127 15:32:02.719880 1068488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:32:02.735739 1068488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:32:02.874529 1068488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:32:02.897491 1068488 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706 for IP: 192.168.72.49
	I0127 15:32:02.897521 1068488 certs.go:194] generating shared ca certs ...
	I0127 15:32:02.897545 1068488 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:32:02.897730 1068488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:32:02.897783 1068488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:32:02.897796 1068488 certs.go:256] generating profile certs ...
	I0127 15:32:02.897920 1068488 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.key
	I0127 15:32:02.897963 1068488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt with IP's: []
	I0127 15:32:03.255184 1068488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt ...
	I0127 15:32:03.255230 1068488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: {Name:mk07e665948467da2f46a392d43845f87923c563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:32:03.255451 1068488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.key ...
	I0127 15:32:03.255474 1068488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.key: {Name:mk0713850b81ac9f559ebf443c5b9ebc55b0f869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:32:03.255614 1068488 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key.8816e362
	I0127 15:32:03.255642 1068488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt.8816e362 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.49]
	I0127 15:32:03.473732 1068488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt.8816e362 ...
	I0127 15:32:03.473776 1068488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt.8816e362: {Name:mk99ed8fed5f8937175b6daaddbfac3818f20b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:32:03.473980 1068488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key.8816e362 ...
	I0127 15:32:03.474003 1068488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key.8816e362: {Name:mk337b87c724a5d0d87a9f60b861f955fe2277d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:32:03.474121 1068488 certs.go:381] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt.8816e362 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt
	I0127 15:32:03.474241 1068488 certs.go:385] copying /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key.8816e362 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key
	I0127 15:32:03.474330 1068488 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key
	I0127 15:32:03.474357 1068488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.crt with IP's: []
	I0127 15:32:03.676255 1068488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.crt ...
	I0127 15:32:03.676299 1068488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.crt: {Name:mkf8c38ab1cfd17f046601a2a40e41029d6c7711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:32:03.676501 1068488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key ...
	I0127 15:32:03.676518 1068488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key: {Name:mk2e555f456da2a90be16b0f9b7a1a74242d1cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:32:03.676747 1068488 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:32:03.676799 1068488 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:32:03.676810 1068488 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:32:03.676844 1068488 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:32:03.676878 1068488 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:32:03.676908 1068488 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:32:03.676960 1068488 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:32:03.677775 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:32:03.706988 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:32:03.733684 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:32:03.759485 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:32:03.787421 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 15:32:03.844528 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:32:03.885195 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:32:03.920652 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 15:32:03.981214 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:32:04.010415 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:32:04.048487 1068488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:32:04.081807 1068488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:32:04.107731 1068488 ssh_runner.go:195] Run: openssl version
	I0127 15:32:04.115417 1068488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:32:04.131032 1068488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:32:04.136340 1068488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:32:04.136413 1068488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:32:04.143483 1068488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 15:32:04.157154 1068488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:32:04.173559 1068488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:32:04.180076 1068488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:32:04.180156 1068488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:32:04.188523 1068488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:32:04.202178 1068488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:32:04.214804 1068488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:32:04.219831 1068488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:32:04.219912 1068488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:32:04.226704 1068488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:32:04.239594 1068488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:32:04.244955 1068488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 15:32:04.245078 1068488 kubeadm.go:392] StartCluster: {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:32:04.245187 1068488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:32:04.245248 1068488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:32:04.293496 1068488 cri.go:89] found id: ""
	I0127 15:32:04.293575 1068488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:32:04.305028 1068488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:32:04.317584 1068488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:32:04.329615 1068488 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:32:04.329641 1068488 kubeadm.go:157] found existing configuration files:
	
	I0127 15:32:04.329701 1068488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:32:04.340477 1068488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:32:04.340557 1068488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:32:04.351641 1068488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:32:04.362325 1068488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:32:04.362395 1068488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:32:04.372949 1068488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:32:04.385125 1068488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:32:04.385207 1068488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:32:04.397576 1068488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:32:04.409863 1068488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:32:04.409929 1068488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:32:04.422342 1068488 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:32:04.752584 1068488 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:34:03.149330 1068488 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:34:03.149483 1068488 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:34:03.151081 1068488 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:34:03.151217 1068488 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:34:03.151551 1068488 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:34:03.151812 1068488 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:34:03.152194 1068488 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:34:03.152365 1068488 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:34:03.154118 1068488 out.go:235]   - Generating certificates and keys ...
	I0127 15:34:03.154238 1068488 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:34:03.154313 1068488 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:34:03.154370 1068488 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 15:34:03.154421 1068488 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 15:34:03.154469 1068488 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 15:34:03.154509 1068488 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 15:34:03.154552 1068488 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 15:34:03.154654 1068488 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-405706] and IPs [192.168.72.49 127.0.0.1 ::1]
	I0127 15:34:03.154700 1068488 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 15:34:03.154809 1068488 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-405706] and IPs [192.168.72.49 127.0.0.1 ::1]
	I0127 15:34:03.154864 1068488 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 15:34:03.154917 1068488 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 15:34:03.154955 1068488 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 15:34:03.155009 1068488 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:34:03.155051 1068488 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:34:03.155097 1068488 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:34:03.155150 1068488 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:34:03.155194 1068488 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:34:03.155309 1068488 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:34:03.155438 1068488 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:34:03.155505 1068488 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:34:03.155600 1068488 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:34:03.157159 1068488 out.go:235]   - Booting up control plane ...
	I0127 15:34:03.157232 1068488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:34:03.157296 1068488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:34:03.157351 1068488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:34:03.157432 1068488 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:34:03.157599 1068488 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:34:03.157669 1068488 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:34:03.157755 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:34:03.157944 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:34:03.158042 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:34:03.158226 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:34:03.158315 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:34:03.158523 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:34:03.158592 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:34:03.158735 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:34:03.158790 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:34:03.158947 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:34:03.158963 1068488 kubeadm.go:310] 
	I0127 15:34:03.159023 1068488 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:34:03.159117 1068488 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:34:03.159137 1068488 kubeadm.go:310] 
	I0127 15:34:03.159173 1068488 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:34:03.159202 1068488 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:34:03.159287 1068488 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:34:03.159295 1068488 kubeadm.go:310] 
	I0127 15:34:03.159378 1068488 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:34:03.159411 1068488 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:34:03.159437 1068488 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:34:03.159444 1068488 kubeadm.go:310] 
	I0127 15:34:03.159529 1068488 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:34:03.159602 1068488 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:34:03.159608 1068488 kubeadm.go:310] 
	I0127 15:34:03.159696 1068488 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:34:03.159764 1068488 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:34:03.159827 1068488 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:34:03.159887 1068488 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:34:03.159938 1068488 kubeadm.go:310] 
	W0127 15:34:03.160008 1068488 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-405706] and IPs [192.168.72.49 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-405706] and IPs [192.168.72.49 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-405706] and IPs [192.168.72.49 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-405706] and IPs [192.168.72.49 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 15:34:03.160044 1068488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:34:04.021231 1068488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:34:04.035964 1068488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:34:04.046354 1068488 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:34:04.046381 1068488 kubeadm.go:157] found existing configuration files:
	
	I0127 15:34:04.046427 1068488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:34:04.056203 1068488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:34:04.056277 1068488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:34:04.066729 1068488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:34:04.076382 1068488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:34:04.076454 1068488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:34:04.086304 1068488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:34:04.095695 1068488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:34:04.095752 1068488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:34:04.105666 1068488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:34:04.115031 1068488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:34:04.115093 1068488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:34:04.125207 1068488 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:34:04.198603 1068488 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:34:04.198681 1068488 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:34:04.362159 1068488 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:34:04.362300 1068488 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:34:04.362445 1068488 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:34:04.557069 1068488 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:34:04.559176 1068488 out.go:235]   - Generating certificates and keys ...
	I0127 15:34:04.560830 1068488 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:34:04.560958 1068488 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:34:04.561125 1068488 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:34:04.561229 1068488 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:34:04.561326 1068488 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:34:04.561403 1068488 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:34:04.561635 1068488 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:34:04.562270 1068488 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:34:04.562581 1068488 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:34:04.563601 1068488 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:34:04.563668 1068488 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:34:04.563727 1068488 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:34:04.737282 1068488 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:34:04.807784 1068488 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:34:05.037219 1068488 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:34:05.247842 1068488 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:34:05.262840 1068488 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:34:05.265456 1068488 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:34:05.265668 1068488 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:34:05.412105 1068488 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:34:05.414296 1068488 out.go:235]   - Booting up control plane ...
	I0127 15:34:05.414409 1068488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:34:05.420663 1068488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:34:05.422892 1068488 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:34:05.423759 1068488 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:34:05.426018 1068488 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:34:45.426191 1068488 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:34:45.426586 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:34:45.426809 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:34:50.427296 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:34:50.427484 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:35:00.428186 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:35:00.428439 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:35:20.429624 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:35:20.429933 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:36:00.432284 1068488 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:36:00.432611 1068488 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:36:00.432645 1068488 kubeadm.go:310] 
	I0127 15:36:00.432705 1068488 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:36:00.432784 1068488 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:36:00.432803 1068488 kubeadm.go:310] 
	I0127 15:36:00.432835 1068488 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:36:00.432881 1068488 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:36:00.432983 1068488 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:36:00.433026 1068488 kubeadm.go:310] 
	I0127 15:36:00.433205 1068488 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:36:00.433258 1068488 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:36:00.433290 1068488 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:36:00.433297 1068488 kubeadm.go:310] 
	I0127 15:36:00.433435 1068488 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:36:00.433561 1068488 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:36:00.433573 1068488 kubeadm.go:310] 
	I0127 15:36:00.433749 1068488 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:36:00.433881 1068488 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:36:00.434004 1068488 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:36:00.434135 1068488 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:36:00.434148 1068488 kubeadm.go:310] 
	I0127 15:36:00.435065 1068488 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:36:00.435196 1068488 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:36:00.435305 1068488 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:36:00.435388 1068488 kubeadm.go:394] duration metric: took 3m56.190314874s to StartCluster
	I0127 15:36:00.435456 1068488 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:36:00.435524 1068488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:36:00.493355 1068488 cri.go:89] found id: ""
	I0127 15:36:00.493393 1068488 logs.go:282] 0 containers: []
	W0127 15:36:00.493405 1068488 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:36:00.493414 1068488 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:36:00.493488 1068488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:36:00.543647 1068488 cri.go:89] found id: ""
	I0127 15:36:00.543677 1068488 logs.go:282] 0 containers: []
	W0127 15:36:00.543686 1068488 logs.go:284] No container was found matching "etcd"
	I0127 15:36:00.543692 1068488 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:36:00.543749 1068488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:36:00.582196 1068488 cri.go:89] found id: ""
	I0127 15:36:00.582226 1068488 logs.go:282] 0 containers: []
	W0127 15:36:00.582239 1068488 logs.go:284] No container was found matching "coredns"
	I0127 15:36:00.582247 1068488 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:36:00.582321 1068488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:36:00.623063 1068488 cri.go:89] found id: ""
	I0127 15:36:00.623106 1068488 logs.go:282] 0 containers: []
	W0127 15:36:00.623121 1068488 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:36:00.623131 1068488 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:36:00.623207 1068488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:36:00.681718 1068488 cri.go:89] found id: ""
	I0127 15:36:00.681748 1068488 logs.go:282] 0 containers: []
	W0127 15:36:00.681757 1068488 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:36:00.681763 1068488 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:36:00.681827 1068488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:36:00.723064 1068488 cri.go:89] found id: ""
	I0127 15:36:00.723094 1068488 logs.go:282] 0 containers: []
	W0127 15:36:00.723103 1068488 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:36:00.723111 1068488 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:36:00.723178 1068488 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:36:00.766012 1068488 cri.go:89] found id: ""
	I0127 15:36:00.766047 1068488 logs.go:282] 0 containers: []
	W0127 15:36:00.766060 1068488 logs.go:284] No container was found matching "kindnet"
	I0127 15:36:00.766074 1068488 logs.go:123] Gathering logs for kubelet ...
	I0127 15:36:00.766092 1068488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:36:00.820144 1068488 logs.go:123] Gathering logs for dmesg ...
	I0127 15:36:00.820184 1068488 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:36:00.838217 1068488 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:36:00.838244 1068488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:36:00.995013 1068488 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:36:00.995043 1068488 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:36:00.995059 1068488 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:36:01.119183 1068488 logs.go:123] Gathering logs for container status ...
	I0127 15:36:01.119242 1068488 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0127 15:36:01.164311 1068488 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 15:36:01.164390 1068488 out.go:270] * 
	* 
	W0127 15:36:01.164465 1068488 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:36:01.164485 1068488 out.go:270] * 
	* 
	W0127 15:36:01.165733 1068488 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 15:36:01.169677 1068488 out.go:201] 
	W0127 15:36:01.171153 1068488 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:36:01.171222 1068488 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 15:36:01.171252 1068488 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 15:36:01.172961 1068488 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-405706 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 6 (276.010694ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 15:36:01.502482 1075387 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-405706" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-405706" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (290.13s)
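Triage note: the FirstStart failure above is the kubelet never answering its health endpoint on 127.0.0.1:10248 during 'kubeadm init', so no control-plane containers are created and minikube exits with K8S_KUBELET_NOT_RUNNING (exit status 109). Below is a minimal follow-up sketch built only from the commands the log itself suggests; it assumes the VM for profile old-k8s-version-405706 is still reachable after the failed start:

	# Shell into the failed profile's VM (assumes the VM is still running)
	minikube ssh -p old-k8s-version-405706
	# Inside the VM: check why the kubelet never came up on :10248
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Back on the host: retry with the cgroup-driver suggestion from the log, then collect logs
	minikube start -p old-k8s-version-405706 --extra-config=kubelet.cgroup-driver=systemd
	minikube logs -p old-k8s-version-405706 --file=logs.txt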

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (1608.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-458006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 15:34:52.961946 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-458006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m46.610053495s)

                                                
                                                
-- stdout --
	* [no-preload-458006] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-458006" primary control-plane node in "no-preload-458006" cluster
	* Restarting existing kvm2 VM for "no-preload-458006" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-458006 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:34:52.414094 1074659 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:34:52.414214 1074659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:34:52.414223 1074659 out.go:358] Setting ErrFile to fd 2...
	I0127 15:34:52.414227 1074659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:34:52.414452 1074659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:34:52.415010 1074659 out.go:352] Setting JSON to false
	I0127 15:34:52.416016 1074659 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22639,"bootTime":1737969453,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:34:52.416135 1074659 start.go:139] virtualization: kvm guest
	I0127 15:34:52.418372 1074659 out.go:177] * [no-preload-458006] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:34:52.419903 1074659 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:34:52.419919 1074659 notify.go:220] Checking for updates...
	I0127 15:34:52.422691 1074659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:34:52.424208 1074659 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:34:52.425664 1074659 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:34:52.427117 1074659 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:34:52.428565 1074659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:34:52.430284 1074659 config.go:182] Loaded profile config "no-preload-458006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:34:52.430672 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:34:52.430768 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:34:52.445814 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I0127 15:34:52.446372 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:34:52.446995 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:34:52.447029 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:34:52.447340 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:34:52.447528 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:34:52.447747 1074659 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:34:52.448045 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:34:52.448099 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:34:52.462688 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32949
	I0127 15:34:52.463154 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:34:52.463656 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:34:52.463683 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:34:52.464009 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:34:52.464220 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:34:52.499523 1074659 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:34:52.500986 1074659 start.go:297] selected driver: kvm2
	I0127 15:34:52.501000 1074659 start.go:901] validating driver "kvm2" against &{Name:no-preload-458006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-458006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:34:52.501148 1074659 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:34:52.501814 1074659 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.501882 1074659 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:34:52.517533 1074659 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:34:52.518021 1074659 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:34:52.518066 1074659 cni.go:84] Creating CNI manager for ""
	I0127 15:34:52.518135 1074659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:34:52.518191 1074659 start.go:340] cluster config:
	{Name:no-preload-458006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-458006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:34:52.518350 1074659 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.520254 1074659 out.go:177] * Starting "no-preload-458006" primary control-plane node in "no-preload-458006" cluster
	I0127 15:34:52.521862 1074659 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:34:52.522029 1074659 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/config.json ...
	I0127 15:34:52.522118 1074659 cache.go:107] acquiring lock: {Name:mkab877afe00bdef492a275ba5e11b237ab949d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.522171 1074659 cache.go:107] acquiring lock: {Name:mk2d9c886c5f600f7a6d071a8c4fef4d01fd98cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.522108 1074659 cache.go:107] acquiring lock: {Name:mk7a263f087ba5b79dd695145c82a3f24029b744 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.522200 1074659 cache.go:107] acquiring lock: {Name:mk2a84a2da9dbaca8f99820c56db1e9ae62b0d4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.522150 1074659 cache.go:107] acquiring lock: {Name:mk1007e0eea5c45a940ef2803c806fd41df1ed15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.522130 1074659 cache.go:107] acquiring lock: {Name:mk17f67db36f6e3590b1f23e8880369eb1d5048d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.522257 1074659 cache.go:115] /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 15:34:52.522262 1074659 cache.go:115] /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 15:34:52.522274 1074659 cache.go:115] /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 15:34:52.522273 1074659 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 109.665µs
	I0127 15:34:52.522288 1074659 cache.go:115] /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 15:34:52.522286 1074659 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 162.035µs
	I0127 15:34:52.522297 1074659 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 162.722µs
	I0127 15:34:52.522306 1074659 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 15:34:52.522296 1074659 start.go:360] acquireMachinesLock for no-preload-458006: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:34:52.522285 1074659 cache.go:107] acquiring lock: {Name:mk16b7af5c748f49a0915a4b2c055bd1b8f68ef2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.522328 1074659 cache.go:115] /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 15:34:52.522341 1074659 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 243.747µs
	I0127 15:34:52.522360 1074659 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 15:34:52.522305 1074659 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 15:34:52.522300 1074659 cache.go:107] acquiring lock: {Name:mkd8df1f965ffe54752250be415f61a3eb79e161 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:34:52.522366 1074659 start.go:364] duration metric: took 36.894µs to acquireMachinesLock for "no-preload-458006"
	I0127 15:34:52.522408 1074659 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:34:52.522420 1074659 fix.go:54] fixHost starting: 
	I0127 15:34:52.522422 1074659 cache.go:115] /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 15:34:52.522425 1074659 cache.go:115] /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 15:34:52.522433 1074659 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 204.041µs
	I0127 15:34:52.522443 1074659 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 15:34:52.522295 1074659 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 15:34:52.522274 1074659 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 76.604µs
	I0127 15:34:52.522465 1074659 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 15:34:52.522350 1074659 cache.go:115] /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 15:34:52.522443 1074659 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 187.078µs
	I0127 15:34:52.522482 1074659 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 15:34:52.522477 1074659 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 378.063µs
	I0127 15:34:52.522491 1074659 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 15:34:52.522509 1074659 cache.go:87] Successfully saved all images to host disk.
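
	The cache.go lines above show the no-preload profile verifying, before the VM is touched, that every required image already has a tarball under .minikube/cache/images: each lookup takes a per-image lock, stats the tar on disk, and records the hit as a completed save. A minimal sketch of that existence check, assuming a hypothetical cachedTarPath layout rather than minikube's real cache code:

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	        "strings"
	        "time"
	    )

	    // cachedTarPath maps an image ref like "registry.k8s.io/pause:3.10" to a
	    // tarball path under the cache root (hypothetical layout, mirroring the
	    // ".../images/amd64/registry.k8s.io/pause_3.10" paths in the log).
	    func cachedTarPath(cacheRoot, image string) string {
	        name := strings.ReplaceAll(image, ":", "_")
	        return filepath.Join(cacheRoot, "images", "amd64", name)
	    }

	    // ensureCached returns true when the tarball is already on disk, so the
	    // image does not need to be pulled and re-saved.
	    func ensureCached(cacheRoot, image string) (bool, error) {
	        start := time.Now()
	        p := cachedTarPath(cacheRoot, image)
	        if _, err := os.Stat(p); err == nil {
	            fmt.Printf("cache image %q -> %q took %s\n", image, p, time.Since(start))
	            return true, nil
	        } else if !os.IsNotExist(err) {
	            return false, err
	        }
	        return false, nil // caller would pull and save the tarball here
	    }

	    func main() {
	        // example cache root; the real run uses the jenkins workspace path above
	        ok, err := ensureCached("/home/jenkins/.minikube/cache", "registry.k8s.io/pause:3.10")
	        fmt.Println(ok, err)
	    }
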
	I0127 15:34:52.522783 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:34:52.522822 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:34:52.538040 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35457
	I0127 15:34:52.538547 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:34:52.539064 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:34:52.539085 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:34:52.539442 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:34:52.539635 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:34:52.539775 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:34:52.541621 1074659 fix.go:112] recreateIfNeeded on no-preload-458006: state=Stopped err=<nil>
	I0127 15:34:52.541651 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	W0127 15:34:52.541806 1074659 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:34:52.543829 1074659 out.go:177] * Restarting existing kvm2 VM for "no-preload-458006" ...
	I0127 15:34:52.545149 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Start
	I0127 15:34:52.545319 1074659 main.go:141] libmachine: (no-preload-458006) starting domain...
	I0127 15:34:52.545340 1074659 main.go:141] libmachine: (no-preload-458006) ensuring networks are active...
	I0127 15:34:52.546099 1074659 main.go:141] libmachine: (no-preload-458006) Ensuring network default is active
	I0127 15:34:52.546443 1074659 main.go:141] libmachine: (no-preload-458006) Ensuring network mk-no-preload-458006 is active
	I0127 15:34:52.546823 1074659 main.go:141] libmachine: (no-preload-458006) getting domain XML...
	I0127 15:34:52.547704 1074659 main.go:141] libmachine: (no-preload-458006) creating domain...
	I0127 15:34:53.762069 1074659 main.go:141] libmachine: (no-preload-458006) waiting for IP...
	I0127 15:34:53.762827 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:53.763582 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:53.763668 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:53.763588 1074694 retry.go:31] will retry after 302.13278ms: waiting for domain to come up
	I0127 15:34:54.067390 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:54.068104 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:54.068144 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:54.068068 1074694 retry.go:31] will retry after 321.892707ms: waiting for domain to come up
	I0127 15:34:54.391687 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:54.392267 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:54.392306 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:54.392213 1074694 retry.go:31] will retry after 424.254977ms: waiting for domain to come up
	I0127 15:34:54.817861 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:54.818581 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:54.818610 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:54.818540 1074694 retry.go:31] will retry after 569.922584ms: waiting for domain to come up
	I0127 15:34:55.390315 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:55.390849 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:55.390871 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:55.390802 1074694 retry.go:31] will retry after 490.901527ms: waiting for domain to come up
	I0127 15:34:55.883879 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:55.884365 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:55.884423 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:55.884341 1074694 retry.go:31] will retry after 889.954092ms: waiting for domain to come up
	I0127 15:34:56.775410 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:56.775925 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:56.775975 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:56.775870 1074694 retry.go:31] will retry after 1.094094924s: waiting for domain to come up
	I0127 15:34:57.871942 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:57.872457 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:57.872513 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:57.872430 1074694 retry.go:31] will retry after 1.372357525s: waiting for domain to come up
	I0127 15:34:59.246188 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:34:59.246736 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:34:59.246761 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:34:59.246720 1074694 retry.go:31] will retry after 1.12546249s: waiting for domain to come up
	I0127 15:35:00.374135 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:00.374672 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:35:00.374705 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:35:00.374628 1074694 retry.go:31] will retry after 1.589774378s: waiting for domain to come up
	I0127 15:35:01.966397 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:01.967060 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:35:01.967093 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:35:01.967014 1074694 retry.go:31] will retry after 2.796743588s: waiting for domain to come up
	I0127 15:35:04.766334 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:04.766836 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:35:04.766867 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:35:04.766791 1074694 retry.go:31] will retry after 3.490929661s: waiting for domain to come up
	I0127 15:35:08.515656 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:08.516172 1074659 main.go:141] libmachine: (no-preload-458006) DBG | unable to find current IP address of domain no-preload-458006 in network mk-no-preload-458006
	I0127 15:35:08.516205 1074659 main.go:141] libmachine: (no-preload-458006) DBG | I0127 15:35:08.516103 1074694 retry.go:31] will retry after 2.997290302s: waiting for domain to come up
	I0127 15:35:11.517469 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.518145 1074659 main.go:141] libmachine: (no-preload-458006) found domain IP: 192.168.50.30
	I0127 15:35:11.518172 1074659 main.go:141] libmachine: (no-preload-458006) reserving static IP address...
	I0127 15:35:11.518202 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has current primary IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.518743 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "no-preload-458006", mac: "52:54:00:4f:b5:94", ip: "192.168.50.30"} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:11.518777 1074659 main.go:141] libmachine: (no-preload-458006) DBG | skip adding static IP to network mk-no-preload-458006 - found existing host DHCP lease matching {name: "no-preload-458006", mac: "52:54:00:4f:b5:94", ip: "192.168.50.30"}
	I0127 15:35:11.518786 1074659 main.go:141] libmachine: (no-preload-458006) reserved static IP address 192.168.50.30 for domain no-preload-458006
	I0127 15:35:11.518795 1074659 main.go:141] libmachine: (no-preload-458006) waiting for SSH...
	I0127 15:35:11.518804 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Getting to WaitForSSH function...
	I0127 15:35:11.521525 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.521978 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:11.522010 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.522138 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Using SSH client type: external
	I0127 15:35:11.522164 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa (-rw-------)
	I0127 15:35:11.522199 1074659 main.go:141] libmachine: (no-preload-458006) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:35:11.522224 1074659 main.go:141] libmachine: (no-preload-458006) DBG | About to run SSH command:
	I0127 15:35:11.522234 1074659 main.go:141] libmachine: (no-preload-458006) DBG | exit 0
	I0127 15:35:11.649161 1074659 main.go:141] libmachine: (no-preload-458006) DBG | SSH cmd err, output: <nil>: 
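
	The block above is a retry-with-backoff loop: the kvm2 driver polls libvirt for a DHCP lease on the domain's MAC, sleeping a growing interval between attempts ("will retry after ...: waiting for domain to come up"), and once the IP is known it waits for an external ssh "exit 0" probe to succeed. A generic sketch of that pattern, with a stand-in probe function rather than the driver's actual API:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retryWithBackoff calls probe until it succeeds or attempts run out,
	    // growing the sleep between attempts and adding a little jitter, similar
	    // to the "will retry after ..." lines in the log.
	    func retryWithBackoff(attempts int, initial time.Duration, probe func() error) error {
	        delay := initial
	        for i := 0; i < attempts; i++ {
	            if err := probe(); err == nil {
	                return nil
	            } else {
	                jitter := time.Duration(rand.Int63n(int64(delay) / 2))
	                wait := delay + jitter
	                fmt.Printf("will retry after %s: %v\n", wait, err)
	                time.Sleep(wait)
	                delay *= 2
	            }
	        }
	        return errors.New("gave up waiting")
	    }

	    func main() {
	        tries := 0
	        _ = retryWithBackoff(10, 300*time.Millisecond, func() error {
	            tries++
	            if tries < 4 {
	                return errors.New("waiting for domain to come up")
	            }
	            return nil // e.g. DHCP lease found, IP known
	        })
	    }
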
	I0127 15:35:11.649628 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetConfigRaw
	I0127 15:35:11.650313 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetIP
	I0127 15:35:11.653025 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.653413 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:11.653442 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.653682 1074659 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/config.json ...
	I0127 15:35:11.653882 1074659 machine.go:93] provisionDockerMachine start ...
	I0127 15:35:11.653900 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:35:11.654133 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:11.656481 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.656816 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:11.656872 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.656969 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:11.657167 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:11.657303 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:11.657442 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:11.657567 1074659 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:11.657792 1074659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0127 15:35:11.657807 1074659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:35:11.765760 1074659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 15:35:11.765797 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetMachineName
	I0127 15:35:11.766093 1074659 buildroot.go:166] provisioning hostname "no-preload-458006"
	I0127 15:35:11.766121 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetMachineName
	I0127 15:35:11.766329 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:11.769250 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.769610 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:11.769642 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.769758 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:11.769927 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:11.770095 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:11.770223 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:11.770371 1074659 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:11.770544 1074659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0127 15:35:11.770555 1074659 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-458006 && echo "no-preload-458006" | sudo tee /etc/hostname
	I0127 15:35:11.897149 1074659 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-458006
	
	I0127 15:35:11.897206 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:11.900191 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.900525 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:11.900558 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:11.900692 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:11.900902 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:11.901111 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:11.901281 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:11.901465 1074659 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:11.901656 1074659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0127 15:35:11.901676 1074659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-458006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-458006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-458006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:35:12.022505 1074659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:35:12.022541 1074659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:35:12.022567 1074659 buildroot.go:174] setting up certificates
	I0127 15:35:12.022579 1074659 provision.go:84] configureAuth start
	I0127 15:35:12.022592 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetMachineName
	I0127 15:35:12.022959 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetIP
	I0127 15:35:12.025546 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.025935 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.025977 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.026122 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:12.028366 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.028841 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.028877 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.029065 1074659 provision.go:143] copyHostCerts
	I0127 15:35:12.029138 1074659 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:35:12.029163 1074659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:35:12.029230 1074659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:35:12.029342 1074659 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:35:12.029352 1074659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:35:12.029379 1074659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:35:12.029443 1074659 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:35:12.029456 1074659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:35:12.029478 1074659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:35:12.029591 1074659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.no-preload-458006 san=[127.0.0.1 192.168.50.30 localhost minikube no-preload-458006]
	I0127 15:35:12.137689 1074659 provision.go:177] copyRemoteCerts
	I0127 15:35:12.137749 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:35:12.137777 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:12.140625 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.140957 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.140999 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.141204 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:12.141437 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:12.141616 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:12.141737 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:35:12.228174 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:35:12.255020 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 15:35:12.282003 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 15:35:12.307524 1074659 provision.go:87] duration metric: took 284.926507ms to configureAuth
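
	configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the VM IP 192.168.50.30, localhost, minikube and the profile name, then scps it to /etc/docker on the guest. The self-signed sketch below only illustrates how those SANs end up in an x509 template; the real step signs with the minikube CA key rather than self-signing:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        // SANs taken from the "san=[...]" list in the log above.
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-458006"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.30")},
	            DNSNames:     []string{"localhost", "minikube", "no-preload-458006"},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
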
	I0127 15:35:12.307562 1074659 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:35:12.307780 1074659 config.go:182] Loaded profile config "no-preload-458006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:35:12.307880 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:12.310694 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.311073 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.311105 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.311313 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:12.311529 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:12.311676 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:12.311817 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:12.311969 1074659 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:12.312174 1074659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0127 15:35:12.312196 1074659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:35:12.542839 1074659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:35:12.542881 1074659 machine.go:96] duration metric: took 888.984939ms to provisionDockerMachine
	I0127 15:35:12.542898 1074659 start.go:293] postStartSetup for "no-preload-458006" (driver="kvm2")
	I0127 15:35:12.542913 1074659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:35:12.542941 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:35:12.543242 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:35:12.543278 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:12.546082 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.546432 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.546461 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.546590 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:12.546796 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:12.546928 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:12.547085 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:35:12.632038 1074659 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:35:12.636506 1074659 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:35:12.636535 1074659 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:35:12.636609 1074659 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:35:12.636703 1074659 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:35:12.636819 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:35:12.646850 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:35:12.672340 1074659 start.go:296] duration metric: took 129.424031ms for postStartSetup
	I0127 15:35:12.672444 1074659 fix.go:56] duration metric: took 20.150021948s for fixHost
	I0127 15:35:12.672478 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:12.675204 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.675522 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.675554 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.675703 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:12.675904 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:12.676104 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:12.676248 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:12.676397 1074659 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:12.676626 1074659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.30 22 <nil> <nil>}
	I0127 15:35:12.676640 1074659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:35:12.786351 1074659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737992112.744300236
	
	I0127 15:35:12.786405 1074659 fix.go:216] guest clock: 1737992112.744300236
	I0127 15:35:12.786415 1074659 fix.go:229] Guest: 2025-01-27 15:35:12.744300236 +0000 UTC Remote: 2025-01-27 15:35:12.672452313 +0000 UTC m=+20.297470613 (delta=71.847923ms)
	I0127 15:35:12.786457 1074659 fix.go:200] guest clock delta is within tolerance: 71.847923ms
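
	fix.go compares the guest's "date +%s.%N" output against the host clock and only resynchronizes when the delta exceeds a tolerance; here the ~72ms skew is accepted. A small sketch of that comparison (the 2s tolerance below is an assumed value for illustration, not necessarily what minikube uses):

	    package main

	    import (
	        "fmt"
	        "math"
	        "strconv"
	        "strings"
	        "time"
	    )

	    // clockDelta parses the guest's `date +%s.%N` output and returns how far
	    // the guest clock is from the given host time.
	    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	        if err != nil {
	            return 0, err
	        }
	        guest := time.Unix(0, int64(secs*float64(time.Second)))
	        return guest.Sub(host), nil
	    }

	    func main() {
	        // values approximating the Remote/guest timestamps in the log
	        host := time.Date(2025, 1, 27, 15, 35, 12, 672452313, time.UTC)
	        delta, err := clockDelta("1737992112.744300236\n", host)
	        if err != nil {
	            panic(err)
	        }
	        const tolerance = 2 * time.Second // assumed threshold for illustration
	        within := math.Abs(float64(delta)) <= float64(tolerance)
	        fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
	    }
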
	I0127 15:35:12.786466 1074659 start.go:83] releasing machines lock for "no-preload-458006", held for 20.264082011s
	I0127 15:35:12.786501 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:35:12.786798 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetIP
	I0127 15:35:12.790060 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.790549 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.790587 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.790733 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:35:12.791284 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:35:12.791513 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:35:12.791612 1074659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:35:12.791682 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:12.791750 1074659 ssh_runner.go:195] Run: cat /version.json
	I0127 15:35:12.791777 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:35:12.794506 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.794534 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.794938 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.794979 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.795008 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:12.795024 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:12.795146 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:12.795274 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:35:12.795333 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:12.795441 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:12.795548 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:35:12.795600 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:35:12.795770 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:35:12.795922 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:35:12.910133 1074659 ssh_runner.go:195] Run: systemctl --version
	I0127 15:35:12.916687 1074659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:35:13.067526 1074659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:35:13.074243 1074659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:35:13.074333 1074659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:35:13.091494 1074659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:35:13.091541 1074659 start.go:495] detecting cgroup driver to use...
	I0127 15:35:13.091627 1074659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:35:13.109951 1074659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:35:13.126231 1074659 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:35:13.126294 1074659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:35:13.142393 1074659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:35:13.158467 1074659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:35:13.280749 1074659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:35:13.433186 1074659 docker.go:233] disabling docker service ...
	I0127 15:35:13.433284 1074659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:35:13.454229 1074659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:35:13.471184 1074659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:35:13.622591 1074659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:35:13.751388 1074659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:35:13.766439 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:35:13.786832 1074659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 15:35:13.786894 1074659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:13.798103 1074659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:35:13.798184 1074659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:13.810037 1074659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:13.821544 1074659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:13.832712 1074659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:35:13.849649 1074659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:13.861123 1074659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:13.887683 1074659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:13.898790 1074659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:35:13.908692 1074659 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:35:13.908759 1074659 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:35:13.922833 1074659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 15:35:13.933292 1074659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:35:14.058981 1074659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:35:14.167710 1074659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:35:14.167808 1074659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:35:14.172609 1074659 start.go:563] Will wait 60s for crictl version
	I0127 15:35:14.172678 1074659 ssh_runner.go:195] Run: which crictl
	I0127 15:35:14.176542 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:35:14.223037 1074659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:35:14.223134 1074659 ssh_runner.go:195] Run: crio --version
	I0127 15:35:14.254833 1074659 ssh_runner.go:195] Run: crio --version
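
	After restarting crio, start.go waits up to 60s for /var/run/crio/crio.sock to appear and then probes "crictl version" and "crio --version". A sketch of such a socket wait (illustrative only; the real check runs stat over SSH via ssh_runner):

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // waitForSocket polls for a unix socket path until it exists or the
	    // timeout elapses, mirroring the "Will wait 60s for socket path" step.
	    func waitForSocket(path string, timeout, interval time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for {
	            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
	                return nil
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("timed out waiting for %s", path)
	            }
	            time.Sleep(interval)
	        }
	    }

	    func main() {
	        err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond)
	        fmt.Println(err)
	    }
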
	I0127 15:35:14.289260 1074659 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 15:35:14.290496 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetIP
	I0127 15:35:14.293966 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:14.294508 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:35:14.294539 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:35:14.294675 1074659 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 15:35:14.299097 1074659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:35:14.312740 1074659 kubeadm.go:883] updating cluster {Name:no-preload-458006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-458006 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:35:14.312860 1074659 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:35:14.312892 1074659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:35:14.355613 1074659 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 15:35:14.355642 1074659 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.1 registry.k8s.io/kube-controller-manager:v1.32.1 registry.k8s.io/kube-scheduler:v1.32.1 registry.k8s.io/kube-proxy:v1.32.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 15:35:14.355687 1074659 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:35:14.355720 1074659 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 15:35:14.355738 1074659 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 15:35:14.355720 1074659 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 15:35:14.355763 1074659 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 15:35:14.355719 1074659 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 15:35:14.355792 1074659 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 15:35:14.355766 1074659 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 15:35:14.357594 1074659 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:35:14.357609 1074659 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 15:35:14.357616 1074659 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 15:35:14.357620 1074659 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 15:35:14.357632 1074659 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 15:35:14.357595 1074659 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 15:35:14.357604 1074659 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 15:35:14.357594 1074659 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 15:35:14.544431 1074659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0127 15:35:14.554082 1074659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 15:35:14.558814 1074659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0127 15:35:14.580374 1074659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0127 15:35:14.586263 1074659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.1
	I0127 15:35:14.587521 1074659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.1
	I0127 15:35:14.605928 1074659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.1
	I0127 15:35:14.620813 1074659 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0127 15:35:14.620874 1074659 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0127 15:35:14.620933 1074659 ssh_runner.go:195] Run: which crictl
	I0127 15:35:14.665239 1074659 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.1" does not exist at hash "019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35" in container runtime
	I0127 15:35:14.665300 1074659 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 15:35:14.665386 1074659 ssh_runner.go:195] Run: which crictl
	I0127 15:35:14.746297 1074659 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0127 15:35:14.746363 1074659 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 15:35:14.746426 1074659 ssh_runner.go:195] Run: which crictl
	I0127 15:35:14.787102 1074659 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:35:14.855583 1074659 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.1" needs transfer: "registry.k8s.io/kube-proxy:v1.32.1" does not exist at hash "e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a" in container runtime
	I0127 15:35:14.855636 1074659 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 15:35:14.855649 1074659 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.1" does not exist at hash "2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1" in container runtime
	I0127 15:35:14.855691 1074659 ssh_runner.go:195] Run: which crictl
	I0127 15:35:14.855746 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 15:35:14.855758 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 15:35:14.855772 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 15:35:14.855689 1074659 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 15:35:14.855692 1074659 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.1" does not exist at hash "95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a" in container runtime
	I0127 15:35:14.855822 1074659 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0127 15:35:14.855846 1074659 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:35:14.855849 1074659 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 15:35:14.855876 1074659 ssh_runner.go:195] Run: which crictl
	I0127 15:35:14.855894 1074659 ssh_runner.go:195] Run: which crictl
	I0127 15:35:14.855823 1074659 ssh_runner.go:195] Run: which crictl
	I0127 15:35:14.870845 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 15:35:14.944850 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 15:35:14.944903 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 15:35:14.944962 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 15:35:14.945083 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 15:35:14.945123 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 15:35:14.945158 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:35:14.993585 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 15:35:15.123191 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 15:35:15.123248 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 15:35:15.123329 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:35:15.123422 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 15:35:15.123456 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 15:35:15.123544 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 15:35:15.128154 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 15:35:15.284399 1074659 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 15:35:15.284500 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:35:15.284518 1074659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0127 15:35:15.284535 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 15:35:15.289399 1074659 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 15:35:15.289452 1074659 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 15:35:15.289454 1074659 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 15:35:15.289525 1074659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 15:35:15.289536 1074659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0127 15:35:15.293337 1074659 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 15:35:15.293442 1074659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 15:35:15.358728 1074659 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 15:35:15.358802 1074659 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0127 15:35:15.358828 1074659 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0127 15:35:15.358878 1074659 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0127 15:35:15.358800 1074659 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 15:35:15.358912 1074659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0127 15:35:15.358986 1074659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 15:35:15.372811 1074659 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 15:35:15.372931 1074659 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.1 (exists)
	I0127 15:35:15.372958 1074659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 15:35:15.372969 1074659 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I0127 15:35:15.372984 1074659 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.1 (exists)
	I0127 15:35:17.461554 1074659 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.102643487s)
	I0127 15:35:17.461593 1074659 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0127 15:35:17.461624 1074659 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 15:35:17.461632 1074659 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.10269372s)
	I0127 15:35:17.461674 1074659 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0127 15:35:17.461687 1074659 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 15:35:17.461736 1074659 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.1: (2.088752632s)
	I0127 15:35:17.461767 1074659 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.1 (exists)
	I0127 15:35:17.461685 1074659 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1: (2.102672204s)
	I0127 15:35:17.461792 1074659 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.1 (exists)
	I0127 15:35:19.869502 1074659 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1: (2.407751047s)
	I0127 15:35:19.869555 1074659 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 from cache
	I0127 15:35:19.869587 1074659 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0127 15:35:19.869643 1074659 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0127 15:35:23.574652 1074659 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.704975516s)
	I0127 15:35:23.574688 1074659 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0127 15:35:23.574722 1074659 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 15:35:23.574780 1074659 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 15:35:25.847899 1074659 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1: (2.273063675s)
	I0127 15:35:25.847934 1074659 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 from cache
	I0127 15:35:25.847966 1074659 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0127 15:35:25.848022 1074659 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0127 15:35:26.608793 1074659 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0127 15:35:26.608843 1074659 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 15:35:26.608912 1074659 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 15:35:28.068802 1074659 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1: (1.459858185s)
	I0127 15:35:28.068853 1074659 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 from cache
	I0127 15:35:28.068896 1074659 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 15:35:28.068964 1074659 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 15:35:30.133857 1074659 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1: (2.064859397s)
	I0127 15:35:30.133897 1074659 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 from cache
	I0127 15:35:30.133924 1074659 cache_images.go:123] Successfully loaded all cached images
	I0127 15:35:30.133929 1074659 cache_images.go:92] duration metric: took 15.778275775s to LoadCachedImages
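For reference, a minimal local Go sketch of the image-load step traced above: check whether an image is already present in the runtime and, if not, load the pre-pulled tarball with podman. loadCachedImage is a hypothetical helper written for illustration; minikube runs these commands on the guest over SSH (ssh_runner), not locally.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the "podman image inspect" / "podman load -i" pair seen in the log.
func loadCachedImage(image, tarball string) error {
	// "podman image inspect" exits non-zero when the image is missing.
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	// Load the cached tarball, mirroring "sudo podman load -i <file>".
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	err := loadCachedImage("registry.k8s.io/etcd:3.5.16-0", "/var/lib/minikube/images/etcd_3.5.16-0")
	fmt.Println(err)
}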
	I0127 15:35:30.133939 1074659 kubeadm.go:934] updating node { 192.168.50.30 8443 v1.32.1 crio true true} ...
	I0127 15:35:30.134050 1074659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-458006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-458006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:35:30.134130 1074659 ssh_runner.go:195] Run: crio config
	I0127 15:35:30.183366 1074659 cni.go:84] Creating CNI manager for ""
	I0127 15:35:30.183392 1074659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:35:30.183403 1074659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:35:30.183432 1074659 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.30 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-458006 NodeName:no-preload-458006 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 15:35:30.183649 1074659 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-458006"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.30"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.30"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:35:30.183758 1074659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 15:35:30.194095 1074659 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:35:30.194162 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:35:30.203546 1074659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 15:35:30.220491 1074659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:35:30.237369 1074659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0127 15:35:30.254307 1074659 ssh_runner.go:195] Run: grep 192.168.50.30	control-plane.minikube.internal$ /etc/hosts
	I0127 15:35:30.258453 1074659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:35:30.271467 1074659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:35:30.406607 1074659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:35:30.424524 1074659 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006 for IP: 192.168.50.30
	I0127 15:35:30.424551 1074659 certs.go:194] generating shared ca certs ...
	I0127 15:35:30.424573 1074659 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:35:30.424753 1074659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:35:30.424818 1074659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:35:30.424830 1074659 certs.go:256] generating profile certs ...
	I0127 15:35:30.424946 1074659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/client.key
	I0127 15:35:30.425029 1074659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/apiserver.key.11005ac9
	I0127 15:35:30.425091 1074659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/proxy-client.key
	I0127 15:35:30.425333 1074659 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:35:30.425428 1074659 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:35:30.425446 1074659 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:35:30.425480 1074659 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:35:30.425510 1074659 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:35:30.425528 1074659 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:35:30.425573 1074659 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:35:30.426187 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:35:30.453253 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:35:30.493600 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:35:30.526657 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:35:30.562654 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 15:35:30.598226 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:35:30.624232 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:35:30.648452 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/no-preload-458006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 15:35:30.672709 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:35:30.696239 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:35:30.721830 1074659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:35:30.746480 1074659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:35:30.763870 1074659 ssh_runner.go:195] Run: openssl version
	I0127 15:35:30.769867 1074659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:35:30.781637 1074659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:35:30.786326 1074659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:35:30.786372 1074659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:35:30.792260 1074659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:35:30.803315 1074659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:35:30.814572 1074659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:35:30.819275 1074659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:35:30.819323 1074659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:35:30.825401 1074659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:35:30.836891 1074659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:35:30.848385 1074659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:35:30.853123 1074659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:35:30.853188 1074659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:35:30.859028 1074659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
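A rough Go sketch of the CA-linking step just traced: compute the OpenSSL subject hash of a CA PEM and symlink it as <hash>.0 in /etc/ssl/certs so the system trust store can find it. linkCACert is a hypothetical helper; minikube runs the equivalent openssl/ln commands over SSH on the guest, as the log shows.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors "openssl x509 -hash -noout -in <pem>" followed by "ln -fs <pem> /etc/ssl/certs/<hash>.0".
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash %s: %v", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror "ln -fs": replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem"))
}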
	I0127 15:35:30.871103 1074659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:35:30.875716 1074659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:35:30.881802 1074659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:35:30.887572 1074659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:35:30.893588 1074659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:35:30.899521 1074659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:35:30.905473 1074659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
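The "-checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours. A minimal Go equivalent, assuming a readable PEM file (expiresSoon is a hypothetical helper, not minikube's code path):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the certificate at path expires within the given window.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}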
	I0127 15:35:30.911346 1074659 kubeadm.go:392] StartCluster: {Name:no-preload-458006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-458006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:35:30.911460 1074659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:35:30.911509 1074659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:35:30.953585 1074659 cri.go:89] found id: ""
	I0127 15:35:30.953669 1074659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:35:30.964394 1074659 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 15:35:30.964417 1074659 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 15:35:30.964473 1074659 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 15:35:30.974571 1074659 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:35:30.975176 1074659 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-458006" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:35:30.975440 1074659 kubeconfig.go:62] /home/jenkins/minikube-integration/20321-1005652/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-458006" cluster setting kubeconfig missing "no-preload-458006" context setting]
	I0127 15:35:30.975992 1074659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:35:30.977443 1074659 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 15:35:30.987824 1074659 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.30
	I0127 15:35:30.987859 1074659 kubeadm.go:1160] stopping kube-system containers ...
	I0127 15:35:30.987872 1074659 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 15:35:30.987914 1074659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:35:31.028590 1074659 cri.go:89] found id: ""
	I0127 15:35:31.028664 1074659 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 15:35:31.047276 1074659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:35:31.057673 1074659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:35:31.057700 1074659 kubeadm.go:157] found existing configuration files:
	
	I0127 15:35:31.057762 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:35:31.067277 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:35:31.067330 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:35:31.077402 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:35:31.086991 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:35:31.087048 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:35:31.097564 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:35:31.107190 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:35:31.107248 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:35:31.117787 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:35:31.127887 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:35:31.127964 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:35:31.138422 1074659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:35:31.148723 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:31.263909 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:32.183215 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:32.433814 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:32.525054 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
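The five commands above rebuild the control plane by running individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than a full "kubeadm init". A small Go sketch of that sequence, assuming the binary and config paths from the log (runPhases is hypothetical; minikube actually wraps each call in "sudo env PATH=..." over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// runPhases executes each "kubeadm init phase <phase...> --config <config>" in order, stopping on the first failure.
func runPhases(kubeadm, config string, phases [][]string) error {
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v: %s", phase, err, out)
		}
	}
	return nil
}

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	err := runPhases("/var/lib/minikube/binaries/v1.32.1/kubeadm", "/var/tmp/minikube/kubeadm.yaml", phases)
	fmt.Println(err)
}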
	I0127 15:35:32.618236 1074659 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:35:32.618342 1074659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:35:33.118419 1074659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:35:33.618399 1074659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:35:33.637766 1074659 api_server.go:72] duration metric: took 1.019531531s to wait for apiserver process to appear ...
	I0127 15:35:33.637792 1074659 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:35:33.637814 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:35:33.638463 1074659 api_server.go:269] stopped: https://192.168.50.30:8443/healthz: Get "https://192.168.50.30:8443/healthz": dial tcp 192.168.50.30:8443: connect: connection refused
	I0127 15:35:34.138107 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:35:36.938777 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:35:36.938823 1074659 api_server.go:103] status: https://192.168.50.30:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:35:36.938846 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:35:36.948345 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:35:36.948381 1074659 api_server.go:103] status: https://192.168.50.30:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:35:37.138763 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:35:37.145420 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:35:37.145454 1074659 api_server.go:103] status: https://192.168.50.30:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:35:37.638487 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:35:37.652383 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:35:37.652437 1074659 api_server.go:103] status: https://192.168.50.30:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:35:38.137980 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:35:38.147862 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:35:38.147903 1074659 api_server.go:103] status: https://192.168.50.30:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:35:38.638217 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:35:38.643697 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 200:
	ok
	I0127 15:35:38.651769 1074659 api_server.go:141] control plane version: v1.32.1
	I0127 15:35:38.651810 1074659 api_server.go:131] duration metric: took 5.014009684s to wait for apiserver health ...
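The healthz wait above tolerates connection refusals and 500s while the apiserver's post-start hooks finish, retrying roughly every 500ms until /healthz returns 200. A minimal Go sketch of that polling loop (waitForHealthz is a hypothetical helper; the real code verifies the cluster CA rather than skipping TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is only for this sketch; the apiserver serves a cluster-CA-signed cert here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.30:8443/healthz", 4*time.Minute))
}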
	I0127 15:35:38.651822 1074659 cni.go:84] Creating CNI manager for ""
	I0127 15:35:38.651832 1074659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:35:38.653555 1074659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:35:38.655033 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:35:38.673586 1074659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:35:38.704411 1074659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:35:38.723787 1074659 system_pods.go:59] 8 kube-system pods found
	I0127 15:35:38.723945 1074659 system_pods.go:61] "coredns-668d6bf9bc-2xd4n" [301d14d3-cd51-4ac5-94f5-bcf1c1f5b07b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 15:35:38.723998 1074659 system_pods.go:61] "etcd-no-preload-458006" [a6c97ccd-aff0-4db9-8128-63216c58990c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 15:35:38.724055 1074659 system_pods.go:61] "kube-apiserver-no-preload-458006" [dc516053-58a1-4053-92ee-b664180fc61d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 15:35:38.724080 1074659 system_pods.go:61] "kube-controller-manager-no-preload-458006" [5ba6425f-87cc-464b-bd3d-2aa6fcb9a891] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 15:35:38.724110 1074659 system_pods.go:61] "kube-proxy-nsgrv" [34ecc483-7d2d-4f9a-a013-6f83dd7978fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 15:35:38.724148 1074659 system_pods.go:61] "kube-scheduler-no-preload-458006" [c4e46fde-1591-43d6-9570-b29373256567] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 15:35:38.724168 1074659 system_pods.go:61] "metrics-server-f79f97bbb-cnfrq" [3586c6be-7a74-42e1-ac24-331a782510db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:35:38.724184 1074659 system_pods.go:61] "storage-provisioner" [45058bc2-f975-40b5-bc5c-5be1ddec01c6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 15:35:38.724202 1074659 system_pods.go:74] duration metric: took 19.759545ms to wait for pod list to return data ...
	I0127 15:35:38.724243 1074659 node_conditions.go:102] verifying NodePressure condition ...
	I0127 15:35:38.737640 1074659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 15:35:38.737779 1074659 node_conditions.go:123] node cpu capacity is 2
	I0127 15:35:38.737848 1074659 node_conditions.go:105] duration metric: took 13.588233ms to run NodePressure ...
	I0127 15:35:38.737886 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:39.203257 1074659 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 15:35:39.223965 1074659 kubeadm.go:739] kubelet initialised
	I0127 15:35:39.223999 1074659 kubeadm.go:740] duration metric: took 20.635488ms waiting for restarted kubelet to initialise ...
	I0127 15:35:39.224013 1074659 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:35:39.238142 1074659 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-2xd4n" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:41.245989 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-2xd4n" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:43.648499 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-2xd4n" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:44.244191 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-2xd4n" in "kube-system" namespace has status "Ready":"True"
	I0127 15:35:44.244222 1074659 pod_ready.go:82] duration metric: took 5.00604425s for pod "coredns-668d6bf9bc-2xd4n" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:44.244233 1074659 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:46.251354 1074659 pod_ready.go:103] pod "etcd-no-preload-458006" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:46.750926 1074659 pod_ready.go:93] pod "etcd-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:35:46.750959 1074659 pod_ready.go:82] duration metric: took 2.506717778s for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:46.750976 1074659 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:48.758279 1074659 pod_ready.go:103] pod "kube-apiserver-no-preload-458006" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:50.257658 1074659 pod_ready.go:93] pod "kube-apiserver-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:35:50.257685 1074659 pod_ready.go:82] duration metric: took 3.5067009s for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:50.257695 1074659 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:50.263214 1074659 pod_ready.go:93] pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:35:50.263236 1074659 pod_ready.go:82] duration metric: took 5.534263ms for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:50.263245 1074659 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-nsgrv" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:50.268604 1074659 pod_ready.go:93] pod "kube-proxy-nsgrv" in "kube-system" namespace has status "Ready":"True"
	I0127 15:35:50.268634 1074659 pod_ready.go:82] duration metric: took 5.381144ms for pod "kube-proxy-nsgrv" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:50.268647 1074659 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:52.280583 1074659 pod_ready.go:93] pod "kube-scheduler-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:35:52.280607 1074659 pod_ready.go:82] duration metric: took 2.011951871s for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:52.280616 1074659 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:54.287458 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:56.788194 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:58.789190 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:01.292195 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:03.788020 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:05.789354 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:08.287870 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:10.787111 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:12.789686 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:15.285778 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:17.287039 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:19.787888 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:22.287510 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:24.287915 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:26.786726 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:28.788441 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:31.286755 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:33.288619 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:35.787073 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:38.286474 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:40.286852 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:42.788266 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:45.287518 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:47.287564 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:49.287939 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:51.787075 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:53.787842 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:56.287368 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:58.786805 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:00.787691 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:02.787941 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:05.286886 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:07.287145 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:09.288705 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:11.787109 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:13.787189 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:15.787266 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:18.287397 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:20.787551 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:23.287781 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:25.287886 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:27.787881 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:29.788078 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:32.287406 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:34.787934 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:37.287162 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:39.787245 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:41.787908 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:44.286866 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:46.287655 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:48.786182 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:50.787395 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:52.787704 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:55.286967 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:57.289263 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:59.787393 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:01.789117 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:04.290371 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:06.787711 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:08.788730 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:11.289383 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:13.788914 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:16.287163 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:18.287782 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:20.288830 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:22.786853 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:24.787379 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:26.788848 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:29.287085 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:31.288269 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.788390 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:36.287173 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.287892 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:40.787491 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:42.787697 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:45.287260 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:47.287367 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:49.287859 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:51.288012 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.288532 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:55.788221 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.288309 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:00.786850 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:02.787929 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:05.287833 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:07.287889 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.289282 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.788208 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:14.287327 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.288546 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.787976 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:20.788184 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.287582 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.787381 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.787632 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.287493 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.289889 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.787461 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.287358 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.287413 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.287958 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.787400 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:45.787456 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:47.788330 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.288398 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:52.281192 1074659 pod_ready.go:82] duration metric: took 4m0.000550048s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" ...
	E0127 15:39:52.281240 1074659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:39:52.281264 1074659 pod_ready.go:39] duration metric: took 4m13.057238138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:39:52.281309 1074659 kubeadm.go:597] duration metric: took 4m21.316884653s to restartPrimaryControlPlane
	W0127 15:39:52.281435 1074659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:39:52.281477 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:20.131059 1074659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.849552205s)
	I0127 15:40:20.131159 1074659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:20.154965 1074659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:20.170718 1074659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:20.182783 1074659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:20.182813 1074659 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:20.182879 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:40:20.196772 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:20.196838 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:20.219107 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:40:20.231548 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:20.231633 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:20.243226 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:40:20.262500 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:20.262565 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:20.273568 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:40:20.283606 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:20.283675 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:20.294389 1074659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:20.475280 1074659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:40:28.833666 1074659 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:28.833746 1074659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:28.833840 1074659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:28.833927 1074659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:28.834008 1074659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:28.834082 1074659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:28.835576 1074659 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:28.835644 1074659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:28.835701 1074659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:28.835776 1074659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:28.835840 1074659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:28.835918 1074659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:28.835984 1074659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:28.836079 1074659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:28.836170 1074659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:28.836279 1074659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:28.836382 1074659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:28.836440 1074659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:28.836506 1074659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:28.836564 1074659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:28.836645 1074659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:28.836728 1074659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:28.836800 1074659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:28.836889 1074659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:28.836973 1074659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:28.837079 1074659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:28.838668 1074659 out.go:235]   - Booting up control plane ...
	I0127 15:40:28.838772 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:28.838882 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:28.838967 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:28.839120 1074659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:28.839212 1074659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:28.839261 1074659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:28.839412 1074659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:28.839527 1074659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:28.839621 1074659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.133738ms
	I0127 15:40:28.839718 1074659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:28.839793 1074659 kubeadm.go:310] [api-check] The API server is healthy after 5.001467165s
	I0127 15:40:28.839883 1074659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:40:28.840019 1074659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:40:28.840098 1074659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:40:28.840257 1074659 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-458006 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:40:28.840304 1074659 kubeadm.go:310] [bootstrap-token] Using token: ysn4g1.5k9s54b5xvzc8py2
	I0127 15:40:28.841707 1074659 out.go:235]   - Configuring RBAC rules ...
	I0127 15:40:28.841821 1074659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:40:28.841908 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:40:28.842072 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:40:28.842254 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:40:28.842425 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:40:28.842542 1074659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:40:28.842654 1074659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:40:28.842695 1074659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:40:28.842739 1074659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:40:28.842746 1074659 kubeadm.go:310] 
	I0127 15:40:28.842794 1074659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:40:28.842803 1074659 kubeadm.go:310] 
	I0127 15:40:28.842866 1074659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:40:28.842878 1074659 kubeadm.go:310] 
	I0127 15:40:28.842923 1074659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:40:28.843010 1074659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:40:28.843103 1074659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:40:28.843112 1074659 kubeadm.go:310] 
	I0127 15:40:28.843207 1074659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:40:28.843222 1074659 kubeadm.go:310] 
	I0127 15:40:28.843297 1074659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:40:28.843312 1074659 kubeadm.go:310] 
	I0127 15:40:28.843389 1074659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:40:28.843486 1074659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:40:28.843560 1074659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:40:28.843568 1074659 kubeadm.go:310] 
	I0127 15:40:28.843641 1074659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:40:28.843710 1074659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:40:28.843716 1074659 kubeadm.go:310] 
	I0127 15:40:28.843788 1074659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ysn4g1.5k9s54b5xvzc8py2 \
	I0127 15:40:28.843875 1074659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:40:28.843899 1074659 kubeadm.go:310] 	--control-plane 
	I0127 15:40:28.843908 1074659 kubeadm.go:310] 
	I0127 15:40:28.844015 1074659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:40:28.844024 1074659 kubeadm.go:310] 
	I0127 15:40:28.844090 1074659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ysn4g1.5k9s54b5xvzc8py2 \
	I0127 15:40:28.844200 1074659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:40:28.844221 1074659 cni.go:84] Creating CNI manager for ""
	I0127 15:40:28.844233 1074659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:40:28.845800 1074659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:40:28.847251 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:40:28.858165 1074659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:40:28.881328 1074659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:40:28.881400 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:28.881455 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-458006 minikube.k8s.io/updated_at=2025_01_27T15_40_28_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=no-preload-458006 minikube.k8s.io/primary=true
	I0127 15:40:28.897996 1074659 ops.go:34] apiserver oom_adj: -16
	I0127 15:40:29.095553 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:29.596344 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:30.096320 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:30.596512 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:31.096689 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:31.596534 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:32.096361 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:32.595892 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:33.095702 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:33.238790 1074659 kubeadm.go:1113] duration metric: took 4.357463541s to wait for elevateKubeSystemPrivileges
	I0127 15:40:33.238848 1074659 kubeadm.go:394] duration metric: took 5m2.327511742s to StartCluster
	I0127 15:40:33.238888 1074659 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:33.239099 1074659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:40:33.240861 1074659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:33.241710 1074659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:40:33.241765 1074659 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:40:33.241896 1074659 addons.go:69] Setting storage-provisioner=true in profile "no-preload-458006"
	I0127 15:40:33.241924 1074659 addons.go:238] Setting addon storage-provisioner=true in "no-preload-458006"
	W0127 15:40:33.241936 1074659 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:40:33.241970 1074659 config.go:182] Loaded profile config "no-preload-458006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:40:33.241993 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242098 1074659 addons.go:69] Setting default-storageclass=true in profile "no-preload-458006"
	I0127 15:40:33.242136 1074659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-458006"
	I0127 15:40:33.242491 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.242558 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.242562 1074659 addons.go:69] Setting dashboard=true in profile "no-preload-458006"
	I0127 15:40:33.242579 1074659 addons.go:238] Setting addon dashboard=true in "no-preload-458006"
	W0127 15:40:33.242587 1074659 addons.go:247] addon dashboard should already be in state true
	I0127 15:40:33.242619 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242642 1074659 addons.go:69] Setting metrics-server=true in profile "no-preload-458006"
	I0127 15:40:33.242681 1074659 addons.go:238] Setting addon metrics-server=true in "no-preload-458006"
	W0127 15:40:33.242703 1074659 addons.go:247] addon metrics-server should already be in state true
	I0127 15:40:33.242748 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242982 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243002 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243017 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.243038 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.243162 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243195 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.246220 1074659 out.go:177] * Verifying Kubernetes components...
	I0127 15:40:33.247844 1074659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:40:33.260866 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I0127 15:40:33.260900 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0127 15:40:33.260867 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0127 15:40:33.261687 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.261705 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.261805 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0127 15:40:33.262293 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262298 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262311 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.262320 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.262394 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.262663 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.262770 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.262824 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.262973 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262988 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.263265 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.263294 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.263301 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.263705 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.263777 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.263793 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.264103 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.264138 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.264160 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.265173 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.265220 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.266841 1074659 addons.go:238] Setting addon default-storageclass=true in "no-preload-458006"
	W0127 15:40:33.266861 1074659 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:40:33.266882 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.267142 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.267186 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.284237 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0127 15:40:33.284787 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.285432 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.285458 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.285817 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.286054 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.288006 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.288915 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0127 15:40:33.289278 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0127 15:40:33.289464 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.289551 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.290021 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.290033 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.290128 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.290135 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.290430 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.290487 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.290488 1074659 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:40:33.290680 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.290956 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.293313 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.293608 1074659 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:40:33.293756 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.295556 1074659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:40:33.295557 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:40:33.295679 1074659 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:40:33.295688 1074659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:40:33.295709 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.297475 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:40:33.297501 1074659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:40:33.297523 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.300714 1074659 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:33.300736 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:40:33.300756 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.301635 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I0127 15:40:33.302333 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.302863 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.302880 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.303349 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.303970 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.304013 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.305284 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.305834 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.305864 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306025 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.306086 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306246 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.306406 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.306488 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306592 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.309540 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.309565 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.309810 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.310021 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.310146 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.310163 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.310320 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.310404 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.310566 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.310593 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.310786 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.310945 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.329960 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 15:40:33.330745 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.331477 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.331497 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.331931 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.332248 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.334148 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.337343 1074659 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:33.337364 1074659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:40:33.337387 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.344679 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.345163 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.345261 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.345521 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.345738 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.345938 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.346117 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.464899 1074659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:40:33.489798 1074659 node_ready.go:35] waiting up to 6m0s for node "no-preload-458006" to be "Ready" ...
	I0127 15:40:33.523407 1074659 node_ready.go:49] node "no-preload-458006" has status "Ready":"True"
	I0127 15:40:33.523440 1074659 node_ready.go:38] duration metric: took 33.61111ms for node "no-preload-458006" to be "Ready" ...
	I0127 15:40:33.523453 1074659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:33.535257 1074659 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:33.568512 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:33.587974 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:40:33.588003 1074659 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:40:33.619075 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:40:33.619099 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:40:33.633023 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:40:33.633068 1074659 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:40:33.642970 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:33.657566 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:40:33.657595 1074659 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:40:33.664558 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:40:33.664588 1074659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:40:33.687856 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:40:33.687883 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:40:33.714005 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:40:33.714036 1074659 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:40:33.727527 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:33.727554 1074659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:40:33.764439 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:33.790606 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:40:33.790639 1074659 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:40:33.826641 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.826674 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.827044 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.827065 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.827075 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.827083 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.827331 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.827363 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:33.827373 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.834226 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.834269 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.834561 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.834578 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.867815 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:40:33.867848 1074659 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:40:33.891318 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:40:33.891362 1074659 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:40:33.964578 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:33.964616 1074659 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:40:34.002418 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
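For context, the apply commands above are how minikube installs the addon manifests it just copied to /etc/kubernetes/addons on the guest: one kubectl apply with a -f flag per file, run against the cluster's own kubeconfig. Below is a minimal sketch of that pattern, not minikube's actual code; the manifest paths are the ones from the log and the helper names are made up.

// Sketch: apply a set of addon manifests with kubectl, using an explicit kubeconfig.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs `kubectl apply -f <m1> -f <m2> ...` with KUBECONFIG set.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical subset of the dashboard manifests listed in the log.
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	if err := applyManifests("kubectl", "/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}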
	I0127 15:40:34.279743 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.279829 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.280331 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:34.280397 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.280425 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.280447 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.280473 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.280769 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:34.280818 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.280833 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.817958 1074659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053479215s)
	I0127 15:40:34.818069 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.818092 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.818435 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.818495 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.818509 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.818518 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.818778 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.818799 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.818811 1074659 addons.go:479] Verifying addon metrics-server=true in "no-preload-458006"
	I0127 15:40:35.547309 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:36.514576 1074659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.512097478s)
	I0127 15:40:36.514647 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:36.514666 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:36.515033 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:36.515046 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:36.515111 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:36.515130 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:36.515153 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:36.515488 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:36.515527 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:36.515503 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:36.517645 1074659 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-458006 addons enable metrics-server
	
	I0127 15:40:36.519535 1074659 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 15:40:36.520964 1074659 addons.go:514] duration metric: took 3.279215802s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 15:40:38.042609 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:40.046811 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:42.547331 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:44.081830 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.081865 1074659 pod_ready.go:82] duration metric: took 10.546579527s for pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.081882 1074659 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.097962 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.097994 1074659 pod_ready.go:82] duration metric: took 16.102725ms for pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.098014 1074659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.117810 1074659 pod_ready.go:93] pod "etcd-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.117845 1074659 pod_ready.go:82] duration metric: took 19.821766ms for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.117861 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.147522 1074659 pod_ready.go:93] pod "kube-apiserver-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.147557 1074659 pod_ready.go:82] duration metric: took 29.685956ms for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.147573 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.163535 1074659 pod_ready.go:93] pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.163570 1074659 pod_ready.go:82] duration metric: took 15.987018ms for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.163585 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6j6r5" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.440133 1074659 pod_ready.go:93] pod "kube-proxy-6j6r5" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.440165 1074659 pod_ready.go:82] duration metric: took 276.571766ms for pod "kube-proxy-6j6r5" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.440180 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.865610 1074659 pod_ready.go:93] pod "kube-scheduler-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.865643 1074659 pod_ready.go:82] duration metric: took 425.453541ms for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.865655 1074659 pod_ready.go:39] duration metric: took 11.34218973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
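The pod_ready waits above poll each system-critical pod until its Ready condition is True, bounded by a 6m0s timeout per pod. A rough outside equivalent, not minikube's implementation, is to shell out to `kubectl wait` once per label selector listed in the log:

// Sketch: wait for the system-critical pods to report Ready via kubectl wait.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// waitReady blocks until pods matching selector in namespace are Ready, or the timeout expires.
func waitReady(selector, namespace, timeout string) error {
	cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
		"pod", "-l", selector, "-n", namespace, "--timeout="+timeout)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Selectors taken from the pod_ready label list in the log above.
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, s := range selectors {
		if err := waitReady(s, "kube-system", "6m0s"); err != nil {
			fmt.Fprintf(os.Stderr, "pods %q not ready: %v\n", s, err)
		}
	}
}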
	I0127 15:40:44.865682 1074659 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:40:44.865746 1074659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:44.906758 1074659 api_server.go:72] duration metric: took 11.665005612s to wait for apiserver process to appear ...
	I0127 15:40:44.906793 1074659 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:40:44.906829 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:40:44.912296 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 200:
	ok
	I0127 15:40:44.913396 1074659 api_server.go:141] control plane version: v1.32.1
	I0127 15:40:44.913416 1074659 api_server.go:131] duration metric: took 6.606206ms to wait for apiserver health ...
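The healthz check is a plain HTTPS GET against the apiserver's /healthz endpoint; a 200 response with body "ok" is treated as healthy. A self-contained sketch of that probe follows; it is not minikube's code, and TLS verification is skipped here only to keep the example short (minikube verifies against the cluster CA).

// Sketch: probe the apiserver /healthz endpoint and treat HTTP 200 as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiServerHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	// Endpoint taken from the log above.
	ok, err := apiServerHealthy("https://192.168.50.30:8443/healthz")
	fmt.Println(ok, err)
}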
	I0127 15:40:44.913424 1074659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:40:45.045967 1074659 system_pods.go:59] 9 kube-system pods found
	I0127 15:40:45.046012 1074659 system_pods.go:61] "coredns-668d6bf9bc-sp7p4" [7fbb8eca-e2e6-4760-a0b6-8c6387fe9960] Running
	I0127 15:40:45.046020 1074659 system_pods.go:61] "coredns-668d6bf9bc-xgx78" [c3cc3887-d694-4b39-9ad1-c03fcf97b608] Running
	I0127 15:40:45.046025 1074659 system_pods.go:61] "etcd-no-preload-458006" [2474c045-aaa4-4190-8392-3dea1976ded1] Running
	I0127 15:40:45.046031 1074659 system_pods.go:61] "kube-apiserver-no-preload-458006" [2529a3ec-c6a0-4cc7-b93a-7964e435ada0] Running
	I0127 15:40:45.046038 1074659 system_pods.go:61] "kube-controller-manager-no-preload-458006" [989d2483-4dc3-4add-ad64-7f76d4b5c765] Running
	I0127 15:40:45.046043 1074659 system_pods.go:61] "kube-proxy-6j6r5" [3ca06a87-654b-42c2-ac04-12d9b0472973] Running
	I0127 15:40:45.046047 1074659 system_pods.go:61] "kube-scheduler-no-preload-458006" [f6afe797-0eed-4f54-8ed6-fbe75d411b7a] Running
	I0127 15:40:45.046056 1074659 system_pods.go:61] "metrics-server-f79f97bbb-k7879" [137f45e8-cf1d-404b-af06-4b99a257450f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:40:45.046063 1074659 system_pods.go:61] "storage-provisioner" [8e874460-b5bf-4ce6-b1ca-9c188b1fd4e6] Running
	I0127 15:40:45.046074 1074659 system_pods.go:74] duration metric: took 132.642132ms to wait for pod list to return data ...
	I0127 15:40:45.046089 1074659 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:40:45.246663 1074659 default_sa.go:45] found service account: "default"
	I0127 15:40:45.246694 1074659 default_sa.go:55] duration metric: took 200.600423ms for default service account to be created ...
	I0127 15:40:45.246707 1074659 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:40:45.449871 1074659 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-458006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-458006 -n no-preload-458006
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-458006 logs -n 25
E0127 16:01:39.907505 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-458006 logs -n 25: (1.575315675s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo find                             | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo crio                             | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-230388                                       | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-147179 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | disable-driver-mounts-147179                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:33 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-458006             | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-349782            | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-912913  | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:35 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-458006                  | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-349782                 | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-912913       | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-405706        | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-405706             | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 16:01 UTC | 27 Jan 25 16:01 UTC |
	| start   | -p newest-cni-964010 --memory=2200 --alsologtostderr   | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:01 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 16:01:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 16:01:14.945464 1081508 out.go:345] Setting OutFile to fd 1 ...
	I0127 16:01:14.945630 1081508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 16:01:14.945642 1081508 out.go:358] Setting ErrFile to fd 2...
	I0127 16:01:14.945648 1081508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 16:01:14.945835 1081508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 16:01:14.947137 1081508 out.go:352] Setting JSON to false
	I0127 16:01:14.948527 1081508 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":24222,"bootTime":1737969453,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 16:01:14.948635 1081508 start.go:139] virtualization: kvm guest
	I0127 16:01:14.950667 1081508 out.go:177] * [newest-cni-964010] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 16:01:14.952036 1081508 notify.go:220] Checking for updates...
	I0127 16:01:14.952052 1081508 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 16:01:14.953444 1081508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 16:01:14.954709 1081508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 16:01:14.956017 1081508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 16:01:14.957254 1081508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 16:01:14.958554 1081508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 16:01:14.960315 1081508 config.go:182] Loaded profile config "default-k8s-diff-port-912913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 16:01:14.960412 1081508 config.go:182] Loaded profile config "embed-certs-349782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 16:01:14.960503 1081508 config.go:182] Loaded profile config "no-preload-458006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 16:01:14.960664 1081508 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 16:01:14.998345 1081508 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 16:01:14.999532 1081508 start.go:297] selected driver: kvm2
	I0127 16:01:14.999554 1081508 start.go:901] validating driver "kvm2" against <nil>
	I0127 16:01:14.999571 1081508 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 16:01:15.000508 1081508 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 16:01:15.000610 1081508 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 16:01:15.018122 1081508 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 16:01:15.018187 1081508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0127 16:01:15.018275 1081508 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0127 16:01:15.018617 1081508 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 16:01:15.018668 1081508 cni.go:84] Creating CNI manager for ""
	I0127 16:01:15.018751 1081508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 16:01:15.018765 1081508 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 16:01:15.018842 1081508 start.go:340] cluster config:
	{Name:newest-cni-964010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-964010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 16:01:15.018981 1081508 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 16:01:15.020536 1081508 out.go:177] * Starting "newest-cni-964010" primary control-plane node in "newest-cni-964010" cluster
	I0127 16:01:15.021679 1081508 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 16:01:15.021727 1081508 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 16:01:15.021744 1081508 cache.go:56] Caching tarball of preloaded images
	I0127 16:01:15.021852 1081508 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 16:01:15.021864 1081508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 16:01:15.021995 1081508 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/newest-cni-964010/config.json ...
	I0127 16:01:15.022021 1081508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/newest-cni-964010/config.json: {Name:mkc3f132709c7407d3739f6c17e41232d013cb52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 16:01:15.022227 1081508 start.go:360] acquireMachinesLock for newest-cni-964010: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 16:01:15.022290 1081508 start.go:364] duration metric: took 31.284µs to acquireMachinesLock for "newest-cni-964010"
	I0127 16:01:15.022318 1081508 start.go:93] Provisioning new machine with config: &{Name:newest-cni-964010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-964010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 16:01:15.022399 1081508 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 16:01:15.024017 1081508 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 16:01:15.024181 1081508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 16:01:15.024229 1081508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 16:01:15.040865 1081508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33055
	I0127 16:01:15.041406 1081508 main.go:141] libmachine: () Calling .GetVersion
	I0127 16:01:15.042128 1081508 main.go:141] libmachine: Using API Version  1
	I0127 16:01:15.042158 1081508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 16:01:15.042498 1081508 main.go:141] libmachine: () Calling .GetMachineName
	I0127 16:01:15.042736 1081508 main.go:141] libmachine: (newest-cni-964010) Calling .GetMachineName
	I0127 16:01:15.042893 1081508 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	I0127 16:01:15.043072 1081508 start.go:159] libmachine.API.Create for "newest-cni-964010" (driver="kvm2")
	I0127 16:01:15.043105 1081508 client.go:168] LocalClient.Create starting
	I0127 16:01:15.043229 1081508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem
	I0127 16:01:15.043276 1081508 main.go:141] libmachine: Decoding PEM data...
	I0127 16:01:15.043293 1081508 main.go:141] libmachine: Parsing certificate...
	I0127 16:01:15.043371 1081508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem
	I0127 16:01:15.043395 1081508 main.go:141] libmachine: Decoding PEM data...
	I0127 16:01:15.043407 1081508 main.go:141] libmachine: Parsing certificate...
	I0127 16:01:15.043425 1081508 main.go:141] libmachine: Running pre-create checks...
	I0127 16:01:15.043434 1081508 main.go:141] libmachine: (newest-cni-964010) Calling .PreCreateCheck
	I0127 16:01:15.043903 1081508 main.go:141] libmachine: (newest-cni-964010) Calling .GetConfigRaw
	I0127 16:01:15.044439 1081508 main.go:141] libmachine: Creating machine...
	I0127 16:01:15.044454 1081508 main.go:141] libmachine: (newest-cni-964010) Calling .Create
	I0127 16:01:15.044630 1081508 main.go:141] libmachine: (newest-cni-964010) creating KVM machine...
	I0127 16:01:15.044639 1081508 main.go:141] libmachine: (newest-cni-964010) creating network...
	I0127 16:01:15.046108 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | found existing default KVM network
	I0127 16:01:15.047636 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:15.047444 1081531 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:46:43:87} reservation:<nil>}
	I0127 16:01:15.048527 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:15.048431 1081531 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:6a:c7} reservation:<nil>}
	I0127 16:01:15.049459 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:15.049392 1081531 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:1d:bd:da} reservation:<nil>}
	I0127 16:01:15.050800 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:15.050708 1081531 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003ea8b0}
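Subnet selection above walks the private /24 candidates, skips any that an existing libvirt network already occupies, and takes the first free one (192.168.72.0/24 here). A simplified sketch of that selection follows; it is not the driver's actual implementation, which inspects the host's interfaces and libvirt networks rather than a precomputed set.

// Sketch: pick the first candidate /24 subnet that is not already taken.
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate CIDR that is not present in taken.
func firstFreeSubnet(candidates []string, taken map[string]bool) (*net.IPNet, error) {
	for _, cidr := range candidates {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		if !taken[ipnet.String()] {
			return ipnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %d candidates", len(candidates))
}

func main() {
	// Candidates and taken subnets mirror the log: .39, .50 and .61 are occupied.
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true, "192.168.61.0/24": true}
	if free, err := firstFreeSubnet(candidates, taken); err == nil {
		fmt.Println("using free private subnet", free) // 192.168.72.0/24
	}
}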
	I0127 16:01:15.050884 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | created network xml: 
	I0127 16:01:15.050900 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | <network>
	I0127 16:01:15.050907 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |   <name>mk-newest-cni-964010</name>
	I0127 16:01:15.050917 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |   <dns enable='no'/>
	I0127 16:01:15.050948 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |   
	I0127 16:01:15.050969 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0127 16:01:15.050979 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |     <dhcp>
	I0127 16:01:15.050998 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0127 16:01:15.051012 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |     </dhcp>
	I0127 16:01:15.051021 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |   </ip>
	I0127 16:01:15.051030 1081508 main.go:141] libmachine: (newest-cni-964010) DBG |   
	I0127 16:01:15.051046 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | </network>
	I0127 16:01:15.051055 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | 
	I0127 16:01:15.056128 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | trying to create private KVM network mk-newest-cni-964010 192.168.72.0/24...
	I0127 16:01:15.131149 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | private KVM network mk-newest-cni-964010 192.168.72.0/24 created
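The kvm2 driver creates this network through the libvirt API. Done by hand, the equivalent is saving the XML printed above to a file and defining and starting it with virsh; the sketch below is not minikube's code, it just replays that route with the network name and addresses from the log.

// Sketch: define and start the mk-newest-cni-964010 libvirt network via virsh.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func defineAndStartNetwork(xml string) error {
	f, err := os.CreateTemp("", "mk-newest-cni-964010-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(xml); err != nil {
		return err
	}
	f.Close()
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-newest-cni-964010"},
		{"net-autostart", "mk-newest-cni-964010"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Network XML as printed in the log above.
	xml := `<network>
  <name>mk-newest-cni-964010</name>
  <dns enable='no'/>
  <ip address='192.168.72.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.72.2' end='192.168.72.253'/>
    </dhcp>
  </ip>
</network>`
	if err := defineAndStartNetwork(xml); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}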
	I0127 16:01:15.131188 1081508 main.go:141] libmachine: (newest-cni-964010) setting up store path in /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/newest-cni-964010 ...
	I0127 16:01:15.131221 1081508 main.go:141] libmachine: (newest-cni-964010) building disk image from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 16:01:15.131343 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:15.131194 1081531 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 16:01:15.131430 1081508 main.go:141] libmachine: (newest-cni-964010) Downloading /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 16:01:15.470537 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:15.470375 1081531 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/newest-cni-964010/id_rsa...
	I0127 16:01:15.715779 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:15.715642 1081531 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/newest-cni-964010/newest-cni-964010.rawdisk...
	I0127 16:01:15.715821 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | Writing magic tar header
	I0127 16:01:15.716010 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | Writing SSH key tar header
	I0127 16:01:15.716170 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:15.716091 1081531 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/newest-cni-964010 ...
	I0127 16:01:15.716212 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/newest-cni-964010
	I0127 16:01:15.716273 1081508 main.go:141] libmachine: (newest-cni-964010) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/newest-cni-964010 (perms=drwx------)
	I0127 16:01:15.716296 1081508 main.go:141] libmachine: (newest-cni-964010) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube/machines (perms=drwxr-xr-x)
	I0127 16:01:15.716311 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines
	I0127 16:01:15.716325 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 16:01:15.716338 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20321-1005652
	I0127 16:01:15.716355 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 16:01:15.716379 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | checking permissions on dir: /home/jenkins
	I0127 16:01:15.716393 1081508 main.go:141] libmachine: (newest-cni-964010) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652/.minikube (perms=drwxr-xr-x)
	I0127 16:01:15.716404 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | checking permissions on dir: /home
	I0127 16:01:15.716416 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | skipping /home - not owner
	I0127 16:01:15.716427 1081508 main.go:141] libmachine: (newest-cni-964010) setting executable bit set on /home/jenkins/minikube-integration/20321-1005652 (perms=drwxrwxr-x)
	I0127 16:01:15.716455 1081508 main.go:141] libmachine: (newest-cni-964010) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 16:01:15.716471 1081508 main.go:141] libmachine: (newest-cni-964010) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 16:01:15.716537 1081508 main.go:141] libmachine: (newest-cni-964010) creating domain...
	I0127 16:01:15.717730 1081508 main.go:141] libmachine: (newest-cni-964010) define libvirt domain using xml: 
	I0127 16:01:15.717740 1081508 main.go:141] libmachine: (newest-cni-964010) <domain type='kvm'>
	I0127 16:01:15.717776 1081508 main.go:141] libmachine: (newest-cni-964010)   <name>newest-cni-964010</name>
	I0127 16:01:15.717798 1081508 main.go:141] libmachine: (newest-cni-964010)   <memory unit='MiB'>2200</memory>
	I0127 16:01:15.717824 1081508 main.go:141] libmachine: (newest-cni-964010)   <vcpu>2</vcpu>
	I0127 16:01:15.717848 1081508 main.go:141] libmachine: (newest-cni-964010)   <features>
	I0127 16:01:15.717861 1081508 main.go:141] libmachine: (newest-cni-964010)     <acpi/>
	I0127 16:01:15.717871 1081508 main.go:141] libmachine: (newest-cni-964010)     <apic/>
	I0127 16:01:15.717880 1081508 main.go:141] libmachine: (newest-cni-964010)     <pae/>
	I0127 16:01:15.717889 1081508 main.go:141] libmachine: (newest-cni-964010)     
	I0127 16:01:15.717907 1081508 main.go:141] libmachine: (newest-cni-964010)   </features>
	I0127 16:01:15.717919 1081508 main.go:141] libmachine: (newest-cni-964010)   <cpu mode='host-passthrough'>
	I0127 16:01:15.717939 1081508 main.go:141] libmachine: (newest-cni-964010)   
	I0127 16:01:15.717955 1081508 main.go:141] libmachine: (newest-cni-964010)   </cpu>
	I0127 16:01:15.717962 1081508 main.go:141] libmachine: (newest-cni-964010)   <os>
	I0127 16:01:15.717967 1081508 main.go:141] libmachine: (newest-cni-964010)     <type>hvm</type>
	I0127 16:01:15.717972 1081508 main.go:141] libmachine: (newest-cni-964010)     <boot dev='cdrom'/>
	I0127 16:01:15.717993 1081508 main.go:141] libmachine: (newest-cni-964010)     <boot dev='hd'/>
	I0127 16:01:15.718001 1081508 main.go:141] libmachine: (newest-cni-964010)     <bootmenu enable='no'/>
	I0127 16:01:15.718005 1081508 main.go:141] libmachine: (newest-cni-964010)   </os>
	I0127 16:01:15.718010 1081508 main.go:141] libmachine: (newest-cni-964010)   <devices>
	I0127 16:01:15.718016 1081508 main.go:141] libmachine: (newest-cni-964010)     <disk type='file' device='cdrom'>
	I0127 16:01:15.718034 1081508 main.go:141] libmachine: (newest-cni-964010)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/newest-cni-964010/boot2docker.iso'/>
	I0127 16:01:15.718047 1081508 main.go:141] libmachine: (newest-cni-964010)       <target dev='hdc' bus='scsi'/>
	I0127 16:01:15.718066 1081508 main.go:141] libmachine: (newest-cni-964010)       <readonly/>
	I0127 16:01:15.718082 1081508 main.go:141] libmachine: (newest-cni-964010)     </disk>
	I0127 16:01:15.718095 1081508 main.go:141] libmachine: (newest-cni-964010)     <disk type='file' device='disk'>
	I0127 16:01:15.718107 1081508 main.go:141] libmachine: (newest-cni-964010)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 16:01:15.718124 1081508 main.go:141] libmachine: (newest-cni-964010)       <source file='/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/newest-cni-964010/newest-cni-964010.rawdisk'/>
	I0127 16:01:15.718133 1081508 main.go:141] libmachine: (newest-cni-964010)       <target dev='hda' bus='virtio'/>
	I0127 16:01:15.718141 1081508 main.go:141] libmachine: (newest-cni-964010)     </disk>
	I0127 16:01:15.718152 1081508 main.go:141] libmachine: (newest-cni-964010)     <interface type='network'>
	I0127 16:01:15.718161 1081508 main.go:141] libmachine: (newest-cni-964010)       <source network='mk-newest-cni-964010'/>
	I0127 16:01:15.718173 1081508 main.go:141] libmachine: (newest-cni-964010)       <model type='virtio'/>
	I0127 16:01:15.718183 1081508 main.go:141] libmachine: (newest-cni-964010)     </interface>
	I0127 16:01:15.718193 1081508 main.go:141] libmachine: (newest-cni-964010)     <interface type='network'>
	I0127 16:01:15.718204 1081508 main.go:141] libmachine: (newest-cni-964010)       <source network='default'/>
	I0127 16:01:15.718224 1081508 main.go:141] libmachine: (newest-cni-964010)       <model type='virtio'/>
	I0127 16:01:15.718245 1081508 main.go:141] libmachine: (newest-cni-964010)     </interface>
	I0127 16:01:15.718262 1081508 main.go:141] libmachine: (newest-cni-964010)     <serial type='pty'>
	I0127 16:01:15.718275 1081508 main.go:141] libmachine: (newest-cni-964010)       <target port='0'/>
	I0127 16:01:15.718285 1081508 main.go:141] libmachine: (newest-cni-964010)     </serial>
	I0127 16:01:15.718294 1081508 main.go:141] libmachine: (newest-cni-964010)     <console type='pty'>
	I0127 16:01:15.718304 1081508 main.go:141] libmachine: (newest-cni-964010)       <target type='serial' port='0'/>
	I0127 16:01:15.718314 1081508 main.go:141] libmachine: (newest-cni-964010)     </console>
	I0127 16:01:15.718324 1081508 main.go:141] libmachine: (newest-cni-964010)     <rng model='virtio'>
	I0127 16:01:15.718341 1081508 main.go:141] libmachine: (newest-cni-964010)       <backend model='random'>/dev/random</backend>
	I0127 16:01:15.718356 1081508 main.go:141] libmachine: (newest-cni-964010)     </rng>
	I0127 16:01:15.718367 1081508 main.go:141] libmachine: (newest-cni-964010)     
	I0127 16:01:15.718375 1081508 main.go:141] libmachine: (newest-cni-964010)     
	I0127 16:01:15.718385 1081508 main.go:141] libmachine: (newest-cni-964010)   </devices>
	I0127 16:01:15.718393 1081508 main.go:141] libmachine: (newest-cni-964010) </domain>
	I0127 16:01:15.718400 1081508 main.go:141] libmachine: (newest-cni-964010) 
	I0127 16:01:15.722738 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:63:dd:4d in network default
	I0127 16:01:15.723397 1081508 main.go:141] libmachine: (newest-cni-964010) starting domain...
	I0127 16:01:15.723410 1081508 main.go:141] libmachine: (newest-cni-964010) ensuring networks are active...
	I0127 16:01:15.723418 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:15.724087 1081508 main.go:141] libmachine: (newest-cni-964010) Ensuring network default is active
	I0127 16:01:15.724383 1081508 main.go:141] libmachine: (newest-cni-964010) Ensuring network mk-newest-cni-964010 is active
	I0127 16:01:15.724813 1081508 main.go:141] libmachine: (newest-cni-964010) getting domain XML...
	I0127 16:01:15.725505 1081508 main.go:141] libmachine: (newest-cni-964010) creating domain...
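For reference, the define/start sequence logged above (define the domain from the generated XML, make sure the `default` and `mk-newest-cni-964010` networks are active, then create the domain) corresponds roughly to the libvirt calls sketched below. This is a minimal illustration only, assuming the `libvirt.org/go/libvirt` bindings; the file path and error handling are placeholders, not the KVM driver's actual code path.

```go
package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed binding; the driver's real import may differ
)

func main() {
	// Connect to the local system libvirt daemon (the URI the KVM driver targets).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Read the generated domain XML (path is illustrative).
	xml, err := os.ReadFile("newest-cni-964010.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the domain from XML, mirroring the <domain>...</domain> block in the log.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	// "ensuring networks are active...": both networks referenced by the
	// <interface> elements must be running before the domain starts.
	for _, name := range []string{"default", "mk-newest-cni-964010"} {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			log.Fatalf("lookup network %s: %v", name, err)
		}
		active, err := net.IsActive()
		if err != nil {
			log.Fatalf("network state %s: %v", name, err)
		}
		if !active {
			if err := net.Create(); err != nil { // start the inactive network
				log.Fatalf("start network %s: %v", name, err)
			}
		}
		net.Free()
	}

	// "creating domain..." in the log corresponds to starting the defined domain.
	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	fmt.Println("domain started")
}
```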
	I0127 16:01:17.055241 1081508 main.go:141] libmachine: (newest-cni-964010) waiting for IP...
	I0127 16:01:17.056201 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:17.056736 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:17.056852 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:17.056765 1081531 retry.go:31] will retry after 191.693409ms: waiting for domain to come up
	I0127 16:01:17.250482 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:17.250986 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:17.251015 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:17.250948 1081531 retry.go:31] will retry after 335.269173ms: waiting for domain to come up
	I0127 16:01:17.588216 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:17.588913 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:17.588964 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:17.588853 1081531 retry.go:31] will retry after 312.613709ms: waiting for domain to come up
	I0127 16:01:17.903311 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:17.904003 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:17.904045 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:17.903947 1081531 retry.go:31] will retry after 386.167611ms: waiting for domain to come up
	I0127 16:01:18.291522 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:18.292097 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:18.292176 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:18.292085 1081531 retry.go:31] will retry after 487.64767ms: waiting for domain to come up
	I0127 16:01:18.782000 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:18.782661 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:18.782683 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:18.782631 1081531 retry.go:31] will retry after 591.558541ms: waiting for domain to come up
	I0127 16:01:19.375400 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:19.375947 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:19.375975 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:19.375902 1081531 retry.go:31] will retry after 1.153747824s: waiting for domain to come up
	I0127 16:01:20.531949 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:20.532563 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:20.532596 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:20.532525 1081531 retry.go:31] will retry after 1.068481671s: waiting for domain to come up
	I0127 16:01:21.602482 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:21.602996 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:21.603026 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:21.602965 1081531 retry.go:31] will retry after 1.572208007s: waiting for domain to come up
	I0127 16:01:23.176418 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:23.177047 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:23.177103 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:23.177043 1081531 retry.go:31] will retry after 1.460641232s: waiting for domain to come up
	I0127 16:01:24.639783 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:24.640334 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:24.640363 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:24.640283 1081531 retry.go:31] will retry after 2.655121607s: waiting for domain to come up
	I0127 16:01:27.298554 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:27.298979 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:27.299043 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:27.298939 1081531 retry.go:31] will retry after 2.848574734s: waiting for domain to come up
	I0127 16:01:30.149383 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:30.149870 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:30.149897 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:30.149823 1081531 retry.go:31] will retry after 3.61860341s: waiting for domain to come up
	I0127 16:01:33.770893 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:01:33.771474 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:01:33.771506 1081508 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:01:33.771426 1081531 retry.go:31] will retry after 4.20362625s: waiting for domain to come up
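The `retry.go:31` entries above show the driver polling for the domain's IP address with an increasing, jittered delay between attempts (191ms, 335ms, ... 4.2s). A minimal sketch of that pattern follows; `lookupIP` is a hypothetical stand-in for however the DHCP lease is actually queried, and the backoff constants are illustrative, not the driver's real values.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical placeholder for querying the domain's current IP
// (e.g. from the libvirt network's DHCP leases). Here it always fails, as it
// would until the guest has obtained an address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

// waitForIP polls until lookupIP succeeds or the deadline passes, sleeping an
// increasing, jittered interval between attempts, similar to the delays
// visible in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Add up to ~50% jitter, then grow the base delay, capped at a few seconds.
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
		if delay > 4*time.Second {
			delay = 4 * time.Second
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
}

func main() {
	if ip, err := waitForIP("newest-cni-964010", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("IP:", ip)
	}
}
```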
	
	
	==> CRI-O <==
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.711045538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993699711024954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f26a44d9-b759-449d-9dc7-9d8fbb641892 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.711518725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1af16aaa-4cb8-4378-bb00-0f3f19ae64bf name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.711585708Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1af16aaa-4cb8-4378-bb00-0f3f19ae64bf name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.711914383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31e0003fc44f05a55ffbe600876a63c9cb90b4e646812d2cddf13fb04d2dcd71,PodSandboxId:3840df892d1ef5ac8fd8166763bbbcd550d51f5ac11622fd4d720e8e93970d45,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993417167229570,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-s2dxk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: bbbc5f2c-9a78-48be-ad3a-0bdc09270825,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c80f01ef1298adca735949add29c5d1eeabe85faea5e7fb53a1cb314e0500,PodSandboxId:52f48376c2cc47fb02990d5bfe611298921418639f1b5b23ddecfe13b117d505,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992443320994076,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-98qj9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: e7d1a660-247f-4347-95f6-ef8b9df40464,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab946f46c72ff14eb4aed83d3435c8a39193d6f34977462954fb407abfef323b,PodSandboxId:ce0cce9623318dc95841ee6ec15554b65c025b13e3a69802354dd9fa8236a564,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992435940093482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sp7p4,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbb8eca-e2e6-4760-a0b6-8c6387fe9960,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da65347d79a68856c32e40b44143e0ccec259b5c746d7b369ff2f6926c5a8da2,PodSandboxId:1d175d3f66dfe7493716d08ab895e489a578145ca302e3f149380bc52e1e820a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13
f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992435832850212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xgx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cc3887-d694-4b39-9ad1-c03fcf97b608,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:958ff8a6384394f866e06a9bf562174df07861e39e3ccea5eb289dd04002ca7a,PodSandboxId:5d85dc7b667e1ca0452689f2e4eec3603b3212c168497ac7c584bbf1262fe612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&I
mageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992434976271429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6j6r5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca06a87-654b-42c2-ac04-12d9b0472973,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc79d4a492235ef0d2dca0fc6903797e24a96ed751e3da9ededda483cd92521,PodSandboxId:13a0535867c91c513ef2f72a53dd5a73f5a1d2d5ae9cb046dd36e5c7fe8a5b67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992434938577682,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e874460-b5bf-4ce6-b1ca-9c188b1fd4e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af835f5bd5e4898ed04e821ca01d4d2103a209b0756f5b542a451eb366d77ca4,PodSandboxId:4aa549d52d634653bad791d3bc54987a66e9d385b3e114b8c9665d8d5a63c3f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da0
55b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992422837672760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f433ab7474e627ebd8b0ebe368bba65f,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0dca9ab68064e12a14081f5876733c3ecfde6a54a898916ae9b19bc87e01e3,PodSandboxId:ed876b22ee4c5c5b9e3df84b8772204ff2cc6ee3fc5c96c1744e0ae00e5dcf24,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb86
2d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992422876088414,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e81db6a7b1f6cd6d438ff170f9ba0b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c3fc310573e4176eb2efdaee0b6ed19616e91246acae148e8fee84e7e74b3a,PodSandboxId:97c90adac57598f8ced49e3cf3157d7a493de3a11a25e39739ae6422e8353ee3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1f
aa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992422858082479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27699c047c130a3f15934eacb319302,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f3ef1276cb1a247728daaa2d3615714c72d76a011154f8358a03f5d11cbb339,PodSandboxId:08775aa4e64b136e898d3e2e6f308b1ddf4e90757818f3d200140387c208219f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992422792512896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b110fb7ce02f420357cd76146d6a1f6a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ee05f08e95f84420806e2ab52cdc449c56d9276ac207a6383b248caf6b466d,PodSandboxId:876a26e9e4232c8650a62c5cf5974028f50028af2e703431a538b767fe7beb39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992133357879947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b110fb7ce02f420357cd76146d6a1f6a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1af16aaa-4cb8-4378-bb00-0f3f19ae64bf name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.753400183Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93a5344e-5a20-4370-96e3-bf56ce18a6f3 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.753577227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93a5344e-5a20-4370-96e3-bf56ce18a6f3 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.754896429Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2bc2caa-349f-4bb2-952b-7ae120ce9265 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.755295079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993699755253537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2bc2caa-349f-4bb2-952b-7ae120ce9265 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.755764084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6012fa9d-bf0f-4a5a-bb56-23961e577723 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.755858274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6012fa9d-bf0f-4a5a-bb56-23961e577723 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.756210903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31e0003fc44f05a55ffbe600876a63c9cb90b4e646812d2cddf13fb04d2dcd71,PodSandboxId:3840df892d1ef5ac8fd8166763bbbcd550d51f5ac11622fd4d720e8e93970d45,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993417167229570,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-s2dxk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: bbbc5f2c-9a78-48be-ad3a-0bdc09270825,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c80f01ef1298adca735949add29c5d1eeabe85faea5e7fb53a1cb314e0500,PodSandboxId:52f48376c2cc47fb02990d5bfe611298921418639f1b5b23ddecfe13b117d505,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992443320994076,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-98qj9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: e7d1a660-247f-4347-95f6-ef8b9df40464,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab946f46c72ff14eb4aed83d3435c8a39193d6f34977462954fb407abfef323b,PodSandboxId:ce0cce9623318dc95841ee6ec15554b65c025b13e3a69802354dd9fa8236a564,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992435940093482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sp7p4,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbb8eca-e2e6-4760-a0b6-8c6387fe9960,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da65347d79a68856c32e40b44143e0ccec259b5c746d7b369ff2f6926c5a8da2,PodSandboxId:1d175d3f66dfe7493716d08ab895e489a578145ca302e3f149380bc52e1e820a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13
f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992435832850212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xgx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cc3887-d694-4b39-9ad1-c03fcf97b608,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:958ff8a6384394f866e06a9bf562174df07861e39e3ccea5eb289dd04002ca7a,PodSandboxId:5d85dc7b667e1ca0452689f2e4eec3603b3212c168497ac7c584bbf1262fe612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&I
mageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992434976271429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6j6r5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca06a87-654b-42c2-ac04-12d9b0472973,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc79d4a492235ef0d2dca0fc6903797e24a96ed751e3da9ededda483cd92521,PodSandboxId:13a0535867c91c513ef2f72a53dd5a73f5a1d2d5ae9cb046dd36e5c7fe8a5b67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992434938577682,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e874460-b5bf-4ce6-b1ca-9c188b1fd4e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af835f5bd5e4898ed04e821ca01d4d2103a209b0756f5b542a451eb366d77ca4,PodSandboxId:4aa549d52d634653bad791d3bc54987a66e9d385b3e114b8c9665d8d5a63c3f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da0
55b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992422837672760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f433ab7474e627ebd8b0ebe368bba65f,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0dca9ab68064e12a14081f5876733c3ecfde6a54a898916ae9b19bc87e01e3,PodSandboxId:ed876b22ee4c5c5b9e3df84b8772204ff2cc6ee3fc5c96c1744e0ae00e5dcf24,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb86
2d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992422876088414,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e81db6a7b1f6cd6d438ff170f9ba0b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c3fc310573e4176eb2efdaee0b6ed19616e91246acae148e8fee84e7e74b3a,PodSandboxId:97c90adac57598f8ced49e3cf3157d7a493de3a11a25e39739ae6422e8353ee3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1f
aa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992422858082479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27699c047c130a3f15934eacb319302,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f3ef1276cb1a247728daaa2d3615714c72d76a011154f8358a03f5d11cbb339,PodSandboxId:08775aa4e64b136e898d3e2e6f308b1ddf4e90757818f3d200140387c208219f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992422792512896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b110fb7ce02f420357cd76146d6a1f6a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ee05f08e95f84420806e2ab52cdc449c56d9276ac207a6383b248caf6b466d,PodSandboxId:876a26e9e4232c8650a62c5cf5974028f50028af2e703431a538b767fe7beb39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992133357879947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b110fb7ce02f420357cd76146d6a1f6a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6012fa9d-bf0f-4a5a-bb56-23961e577723 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.795853821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71648eb2-a372-4c36-8416-c6ebe3806e15 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.795972746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71648eb2-a372-4c36-8416-c6ebe3806e15 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.797664889Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f195ad40-59aa-488f-ab32-c8e30b9ffb82 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.798193930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993699798170237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f195ad40-59aa-488f-ab32-c8e30b9ffb82 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.798981274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5555b7e9-1040-4837-9c88-29a3ca985c39 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.799065340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5555b7e9-1040-4837-9c88-29a3ca985c39 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.799380019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31e0003fc44f05a55ffbe600876a63c9cb90b4e646812d2cddf13fb04d2dcd71,PodSandboxId:3840df892d1ef5ac8fd8166763bbbcd550d51f5ac11622fd4d720e8e93970d45,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993417167229570,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-s2dxk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: bbbc5f2c-9a78-48be-ad3a-0bdc09270825,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c80f01ef1298adca735949add29c5d1eeabe85faea5e7fb53a1cb314e0500,PodSandboxId:52f48376c2cc47fb02990d5bfe611298921418639f1b5b23ddecfe13b117d505,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992443320994076,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-98qj9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: e7d1a660-247f-4347-95f6-ef8b9df40464,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab946f46c72ff14eb4aed83d3435c8a39193d6f34977462954fb407abfef323b,PodSandboxId:ce0cce9623318dc95841ee6ec15554b65c025b13e3a69802354dd9fa8236a564,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992435940093482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sp7p4,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbb8eca-e2e6-4760-a0b6-8c6387fe9960,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da65347d79a68856c32e40b44143e0ccec259b5c746d7b369ff2f6926c5a8da2,PodSandboxId:1d175d3f66dfe7493716d08ab895e489a578145ca302e3f149380bc52e1e820a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13
f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992435832850212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xgx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cc3887-d694-4b39-9ad1-c03fcf97b608,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:958ff8a6384394f866e06a9bf562174df07861e39e3ccea5eb289dd04002ca7a,PodSandboxId:5d85dc7b667e1ca0452689f2e4eec3603b3212c168497ac7c584bbf1262fe612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&I
mageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992434976271429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6j6r5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca06a87-654b-42c2-ac04-12d9b0472973,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc79d4a492235ef0d2dca0fc6903797e24a96ed751e3da9ededda483cd92521,PodSandboxId:13a0535867c91c513ef2f72a53dd5a73f5a1d2d5ae9cb046dd36e5c7fe8a5b67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992434938577682,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e874460-b5bf-4ce6-b1ca-9c188b1fd4e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af835f5bd5e4898ed04e821ca01d4d2103a209b0756f5b542a451eb366d77ca4,PodSandboxId:4aa549d52d634653bad791d3bc54987a66e9d385b3e114b8c9665d8d5a63c3f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da0
55b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992422837672760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f433ab7474e627ebd8b0ebe368bba65f,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0dca9ab68064e12a14081f5876733c3ecfde6a54a898916ae9b19bc87e01e3,PodSandboxId:ed876b22ee4c5c5b9e3df84b8772204ff2cc6ee3fc5c96c1744e0ae00e5dcf24,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb86
2d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992422876088414,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e81db6a7b1f6cd6d438ff170f9ba0b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c3fc310573e4176eb2efdaee0b6ed19616e91246acae148e8fee84e7e74b3a,PodSandboxId:97c90adac57598f8ced49e3cf3157d7a493de3a11a25e39739ae6422e8353ee3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1f
aa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992422858082479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27699c047c130a3f15934eacb319302,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f3ef1276cb1a247728daaa2d3615714c72d76a011154f8358a03f5d11cbb339,PodSandboxId:08775aa4e64b136e898d3e2e6f308b1ddf4e90757818f3d200140387c208219f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992422792512896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b110fb7ce02f420357cd76146d6a1f6a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ee05f08e95f84420806e2ab52cdc449c56d9276ac207a6383b248caf6b466d,PodSandboxId:876a26e9e4232c8650a62c5cf5974028f50028af2e703431a538b767fe7beb39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992133357879947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b110fb7ce02f420357cd76146d6a1f6a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5555b7e9-1040-4837-9c88-29a3ca985c39 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.851661051Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f156f58-9f7b-4a4e-83c0-305437e88854 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.851763765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f156f58-9f7b-4a4e-83c0-305437e88854 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.853690727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=804272c1-49ae-49e2-97f7-564c8bbad215 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.854247324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993699854215004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=804272c1-49ae-49e2-97f7-564c8bbad215 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.854927692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8626ca46-87cf-4194-8543-f8b5d9d00a57 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.855030027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8626ca46-87cf-4194-8543-f8b5d9d00a57 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:39 no-preload-458006 crio[728]: time="2025-01-27 16:01:39.855370172Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:31e0003fc44f05a55ffbe600876a63c9cb90b4e646812d2cddf13fb04d2dcd71,PodSandboxId:3840df892d1ef5ac8fd8166763bbbcd550d51f5ac11622fd4d720e8e93970d45,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993417167229570,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-s2dxk,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: bbbc5f2c-9a78-48be-ad3a-0bdc09270825,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549c80f01ef1298adca735949add29c5d1eeabe85faea5e7fb53a1cb314e0500,PodSandboxId:52f48376c2cc47fb02990d5bfe611298921418639f1b5b23ddecfe13b117d505,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992443320994076,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-98qj9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: e7d1a660-247f-4347-95f6-ef8b9df40464,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab946f46c72ff14eb4aed83d3435c8a39193d6f34977462954fb407abfef323b,PodSandboxId:ce0cce9623318dc95841ee6ec15554b65c025b13e3a69802354dd9fa8236a564,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992435940093482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-sp7p4,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fbb8eca-e2e6-4760-a0b6-8c6387fe9960,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da65347d79a68856c32e40b44143e0ccec259b5c746d7b369ff2f6926c5a8da2,PodSandboxId:1d175d3f66dfe7493716d08ab895e489a578145ca302e3f149380bc52e1e820a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13
f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992435832850212,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xgx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cc3887-d694-4b39-9ad1-c03fcf97b608,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:958ff8a6384394f866e06a9bf562174df07861e39e3ccea5eb289dd04002ca7a,PodSandboxId:5d85dc7b667e1ca0452689f2e4eec3603b3212c168497ac7c584bbf1262fe612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&I
mageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992434976271429,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6j6r5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca06a87-654b-42c2-ac04-12d9b0472973,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acc79d4a492235ef0d2dca0fc6903797e24a96ed751e3da9ededda483cd92521,PodSandboxId:13a0535867c91c513ef2f72a53dd5a73f5a1d2d5ae9cb046dd36e5c7fe8a5b67,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d6
28db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992434938577682,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e874460-b5bf-4ce6-b1ca-9c188b1fd4e6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af835f5bd5e4898ed04e821ca01d4d2103a209b0756f5b542a451eb366d77ca4,PodSandboxId:4aa549d52d634653bad791d3bc54987a66e9d385b3e114b8c9665d8d5a63c3f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da0
55b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992422837672760,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f433ab7474e627ebd8b0ebe368bba65f,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c0dca9ab68064e12a14081f5876733c3ecfde6a54a898916ae9b19bc87e01e3,PodSandboxId:ed876b22ee4c5c5b9e3df84b8772204ff2cc6ee3fc5c96c1744e0ae00e5dcf24,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb86
2d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992422876088414,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e81db6a7b1f6cd6d438ff170f9ba0b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c3fc310573e4176eb2efdaee0b6ed19616e91246acae148e8fee84e7e74b3a,PodSandboxId:97c90adac57598f8ced49e3cf3157d7a493de3a11a25e39739ae6422e8353ee3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1f
aa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992422858082479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b27699c047c130a3f15934eacb319302,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f3ef1276cb1a247728daaa2d3615714c72d76a011154f8358a03f5d11cbb339,PodSandboxId:08775aa4e64b136e898d3e2e6f308b1ddf4e90757818f3d200140387c208219f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992422792512896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b110fb7ce02f420357cd76146d6a1f6a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ee05f08e95f84420806e2ab52cdc449c56d9276ac207a6383b248caf6b466d,PodSandboxId:876a26e9e4232c8650a62c5cf5974028f50028af2e703431a538b767fe7beb39,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992133357879947,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-458006,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b110fb7ce02f420357cd76146d6a1f6a,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8626ca46-87cf-4194-8543-f8b5d9d00a57 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	31e0003fc44f0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   3840df892d1ef       dashboard-metrics-scraper-86c6bf9756-s2dxk
	549c80f01ef12       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   52f48376c2cc4       kubernetes-dashboard-7779f9b69b-98qj9
	ab946f46c72ff       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   ce0cce9623318       coredns-668d6bf9bc-sp7p4
	da65347d79a68       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   1d175d3f66dfe       coredns-668d6bf9bc-xgx78
	958ff8a638439       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   5d85dc7b667e1       kube-proxy-6j6r5
	acc79d4a49223       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   13a0535867c91       storage-provisioner
	0c0dca9ab6806       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   ed876b22ee4c5       etcd-no-preload-458006
	12c3fc310573e       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   97c90adac5759       kube-scheduler-no-preload-458006
	af835f5bd5e48       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   4aa549d52d634       kube-controller-manager-no-preload-458006
	3f3ef1276cb1a       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   08775aa4e64b1       kube-apiserver-no-preload-458006
	29ee05f08e95f       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   876a26e9e4232       kube-apiserver-no-preload-458006
	
	
	==> coredns [ab946f46c72ff14eb4aed83d3435c8a39193d6f34977462954fb407abfef323b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [da65347d79a68856c32e40b44143e0ccec259b5c746d7b369ff2f6926c5a8da2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-458006
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-458006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=no-preload-458006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T15_40_28_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 15:40:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-458006
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 16:01:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 15:59:51 +0000   Mon, 27 Jan 2025 15:40:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 15:59:51 +0000   Mon, 27 Jan 2025 15:40:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 15:59:51 +0000   Mon, 27 Jan 2025 15:40:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 15:59:51 +0000   Mon, 27 Jan 2025 15:40:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.30
	  Hostname:    no-preload-458006
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 182547c583964238bd4d70895081c6ec
	  System UUID:                182547c5-8396-4238-bd4d-70895081c6ec
	  Boot ID:                    17bff89c-442e-4662-85a6-3a8fe372dff6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-sp7p4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-xgx78                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-458006                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-458006              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-458006     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-6j6r5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-458006              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-k7879                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-s2dxk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-98qj9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node no-preload-458006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node no-preload-458006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node no-preload-458006 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node no-preload-458006 event: Registered Node no-preload-458006 in Controller
	
	
	==> dmesg <==
	[Jan27 15:35] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.943748] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.639659] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.557868] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.057111] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073829] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.184233] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.153948] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +0.310311] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[ +16.337687] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.064626] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.937552] systemd-fstab-generator[1445]: Ignoring "noauto" option for root device
	[  +5.730773] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.822452] kauditd_printk_skb: 89 callbacks suppressed
	[Jan27 15:40] systemd-fstab-generator[3265]: Ignoring "noauto" option for root device
	[  +0.083943] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.997637] systemd-fstab-generator[3606]: Ignoring "noauto" option for root device
	[  +0.091269] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.382837] systemd-fstab-generator[3721]: Ignoring "noauto" option for root device
	[  +0.097395] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.532064] kauditd_printk_skb: 108 callbacks suppressed
	[  +6.184337] kauditd_printk_skb: 5 callbacks suppressed
	[Jan27 15:41] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [0c0dca9ab68064e12a14081f5876733c3ecfde6a54a898916ae9b19bc87e01e3] <==
	{"level":"info","ts":"2025-01-27T15:40:23.496776Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4c46e38203538bcd","local-member-id":"21545a69824e3d79","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T15:40:23.509981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T15:40:23.510025Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T15:40:23.510636Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T15:40:41.819778Z","caller":"traceutil/trace.go:171","msg":"trace[1740434304] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"124.595026ms","start":"2025-01-27T15:40:41.695157Z","end":"2025-01-27T15:40:41.819752Z","steps":["trace[1740434304] 'process raft request'  (duration: 124.487288ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T15:40:42.275904Z","caller":"traceutil/trace.go:171","msg":"trace[628305815] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"118.13451ms","start":"2025-01-27T15:40:42.157690Z","end":"2025-01-27T15:40:42.275824Z","steps":["trace[628305815] 'process raft request'  (duration: 117.946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T15:40:46.101066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.637388ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T15:40:46.102004Z","caller":"traceutil/trace.go:171","msg":"trace[1095915929] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:520; }","duration":"106.558985ms","start":"2025-01-27T15:40:45.995422Z","end":"2025-01-27T15:40:46.101981Z","steps":["trace[1095915929] 'range keys from in-memory index tree'  (duration: 104.247378ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T15:40:59.512433Z","caller":"traceutil/trace.go:171","msg":"trace[284223849] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"262.86557ms","start":"2025-01-27T15:40:59.249548Z","end":"2025-01-27T15:40:59.512413Z","steps":["trace[284223849] 'process raft request'  (duration: 262.029265ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T15:40:59.513322Z","caller":"traceutil/trace.go:171","msg":"trace[1839188026] linearizableReadLoop","detail":"{readStateIndex:565; appliedIndex:564; }","duration":"117.388626ms","start":"2025-01-27T15:40:59.394387Z","end":"2025-01-27T15:40:59.511775Z","steps":["trace[1839188026] 'read index received'  (duration: 116.982047ms)","trace[1839188026] 'applied index is now lower than readState.Index'  (duration: 405.67µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T15:40:59.513566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.141111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T15:40:59.513634Z","caller":"traceutil/trace.go:171","msg":"trace[315806921] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:550; }","duration":"119.262276ms","start":"2025-01-27T15:40:59.394356Z","end":"2025-01-27T15:40:59.513618Z","steps":["trace[315806921] 'agreement among raft nodes before linearized reading'  (duration: 119.076298ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T15:50:23.552783Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":852}
	{"level":"info","ts":"2025-01-27T15:50:23.590405Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":852,"took":"37.080637ms","hash":2039402061,"current-db-size-bytes":2793472,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2793472,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2025-01-27T15:50:23.590583Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2039402061,"revision":852,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T15:55:23.560678Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1112}
	{"level":"info","ts":"2025-01-27T15:55:23.566555Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1112,"took":"5.336596ms","hash":250220924,"current-db-size-bytes":2793472,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1703936,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T15:55:23.566628Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":250220924,"revision":1112,"compact-revision":852}
	{"level":"info","ts":"2025-01-27T16:00:23.570763Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1363}
	{"level":"info","ts":"2025-01-27T16:00:23.578295Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1363,"took":"6.072999ms","hash":804747820,"current-db-size-bytes":2793472,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1691648,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-27T16:00:23.578946Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":804747820,"revision":1363,"compact-revision":1112}
	{"level":"info","ts":"2025-01-27T16:01:30.621396Z","caller":"traceutil/trace.go:171","msg":"trace[298425535] linearizableReadLoop","detail":"{readStateIndex:1945; appliedIndex:1944; }","duration":"104.263635ms","start":"2025-01-27T16:01:30.517088Z","end":"2025-01-27T16:01:30.621351Z","steps":["trace[298425535] 'read index received'  (duration: 104.114105ms)","trace[298425535] 'applied index is now lower than readState.Index'  (duration: 149.128µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T16:01:30.621697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.518421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T16:01:30.621735Z","caller":"traceutil/trace.go:171","msg":"trace[1165632438] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1670; }","duration":"104.662614ms","start":"2025-01-27T16:01:30.517063Z","end":"2025-01-27T16:01:30.621726Z","steps":["trace[1165632438] 'agreement among raft nodes before linearized reading'  (duration: 104.500186ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T16:01:30.621911Z","caller":"traceutil/trace.go:171","msg":"trace[562291355] transaction","detail":"{read_only:false; response_revision:1670; number_of_response:1; }","duration":"180.569695ms","start":"2025-01-27T16:01:30.441335Z","end":"2025-01-27T16:01:30.621904Z","steps":["trace[562291355] 'process raft request'  (duration: 179.917009ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:01:40 up 26 min,  0 users,  load average: 0.43, 0.40, 0.33
	Linux no-preload-458006 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [29ee05f08e95f84420806e2ab52cdc449c56d9276ac207a6383b248caf6b466d] <==
	W0127 15:40:19.239698       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.248765       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.288843       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.306864       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.309355       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.348724       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.357688       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.374025       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.458377       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.461959       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.483996       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.500183       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.577146       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.581781       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.623070       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.687819       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.691303       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.731705       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.736228       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.831016       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.872858       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:19.911843       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:20.044223       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:20.050952       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:20.067407       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [3f3ef1276cb1a247728daaa2d3615714c72d76a011154f8358a03f5d11cbb339] <==
	 > logger="UnhandledError"
	I0127 15:58:26.505071       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 16:00:25.499286       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:00:25.499765       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 16:00:26.501819       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:00:26.501896       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 16:00:26.501942       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:00:26.502027       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 16:00:26.503183       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 16:00:26.503231       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 16:01:26.504222       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 16:01:26.504610       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:01:26.504681       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 16:01:26.504715       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 16:01:26.505856       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 16:01:26.505946       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [af835f5bd5e4898ed04e821ca01d4d2103a209b0756f5b542a451eb366d77ca4] <==
	I0127 15:56:57.845319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="100.43µs"
	I0127 15:56:59.167507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="73.107µs"
	E0127 15:57:02.329985       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:57:02.414623       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 15:57:05.451879       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="53.87µs"
	I0127 15:57:11.162911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="54.598µs"
	E0127 15:57:32.339805       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:57:32.422960       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:58:02.346301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:58:02.431704       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:58:32.353348       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:58:32.439936       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:59:02.362029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:59:02.450072       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:59:32.368858       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:59:32.458408       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 15:59:51.550744       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-458006"
	E0127 16:00:02.377681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:00:02.466975       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:00:32.384538       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:00:32.475344       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:01:02.392761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:01:02.483221       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:01:32.400701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:01:32.492133       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [958ff8a6384394f866e06a9bf562174df07861e39e3ccea5eb289dd04002ca7a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 15:40:35.553254       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 15:40:35.663637       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.30"]
	E0127 15:40:35.663879       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 15:40:35.946359       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 15:40:35.946407       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 15:40:35.946431       1 server_linux.go:170] "Using iptables Proxier"
	I0127 15:40:36.022542       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 15:40:36.024929       1 server.go:497] "Version info" version="v1.32.1"
	I0127 15:40:36.024966       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:40:36.027436       1 config.go:199] "Starting service config controller"
	I0127 15:40:36.027521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 15:40:36.027551       1 config.go:105] "Starting endpoint slice config controller"
	I0127 15:40:36.027555       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 15:40:36.028067       1 config.go:329] "Starting node config controller"
	I0127 15:40:36.028100       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 15:40:36.132580       1 shared_informer.go:320] Caches are synced for service config
	I0127 15:40:36.134534       1 shared_informer.go:320] Caches are synced for node config
	I0127 15:40:36.135420       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [12c3fc310573e4176eb2efdaee0b6ed19616e91246acae148e8fee84e7e74b3a] <==
	W0127 15:40:26.384312       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 15:40:26.384547       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.418531       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 15:40:26.418625       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.419716       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 15:40:26.419832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.431826       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 15:40:26.431932       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.462364       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:26.462608       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.474519       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:26.474608       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.612324       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:26.612492       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.658338       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 15:40:26.658514       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.735182       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 15:40:26.735279       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.764117       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 15:40:26.764235       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 15:40:26.770863       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 15:40:26.770911       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:26.814764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 15:40:26.814836       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 15:40:29.522685       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 16:00:58 no-preload-458006 kubelet[3613]: E0127 16:00:58.585516    3613 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993658585138911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:00:58 no-preload-458006 kubelet[3613]: E0127 16:00:58.585593    3613 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993658585138911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:00:59 no-preload-458006 kubelet[3613]: E0127 16:00:59.145016    3613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-k7879" podUID="137f45e8-cf1d-404b-af06-4b99a257450f"
	Jan 27 16:01:08 no-preload-458006 kubelet[3613]: E0127 16:01:08.587384    3613 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993668586895111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:08 no-preload-458006 kubelet[3613]: E0127 16:01:08.587824    3613 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993668586895111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:12 no-preload-458006 kubelet[3613]: E0127 16:01:12.150797    3613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-k7879" podUID="137f45e8-cf1d-404b-af06-4b99a257450f"
	Jan 27 16:01:13 no-preload-458006 kubelet[3613]: I0127 16:01:13.143965    3613 scope.go:117] "RemoveContainer" containerID="31e0003fc44f05a55ffbe600876a63c9cb90b4e646812d2cddf13fb04d2dcd71"
	Jan 27 16:01:13 no-preload-458006 kubelet[3613]: E0127 16:01:13.144192    3613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-s2dxk_kubernetes-dashboard(bbbc5f2c-9a78-48be-ad3a-0bdc09270825)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-s2dxk" podUID="bbbc5f2c-9a78-48be-ad3a-0bdc09270825"
	Jan 27 16:01:18 no-preload-458006 kubelet[3613]: E0127 16:01:18.590118    3613 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993678589739582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:18 no-preload-458006 kubelet[3613]: E0127 16:01:18.590164    3613 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993678589739582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:24 no-preload-458006 kubelet[3613]: I0127 16:01:24.143748    3613 scope.go:117] "RemoveContainer" containerID="31e0003fc44f05a55ffbe600876a63c9cb90b4e646812d2cddf13fb04d2dcd71"
	Jan 27 16:01:24 no-preload-458006 kubelet[3613]: E0127 16:01:24.144090    3613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-s2dxk_kubernetes-dashboard(bbbc5f2c-9a78-48be-ad3a-0bdc09270825)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-s2dxk" podUID="bbbc5f2c-9a78-48be-ad3a-0bdc09270825"
	Jan 27 16:01:27 no-preload-458006 kubelet[3613]: E0127 16:01:27.145144    3613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-k7879" podUID="137f45e8-cf1d-404b-af06-4b99a257450f"
	Jan 27 16:01:28 no-preload-458006 kubelet[3613]: E0127 16:01:28.172503    3613 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 16:01:28 no-preload-458006 kubelet[3613]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 16:01:28 no-preload-458006 kubelet[3613]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 16:01:28 no-preload-458006 kubelet[3613]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 16:01:28 no-preload-458006 kubelet[3613]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 16:01:28 no-preload-458006 kubelet[3613]: E0127 16:01:28.592218    3613 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993688591727740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:28 no-preload-458006 kubelet[3613]: E0127 16:01:28.592501    3613 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993688591727740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:37 no-preload-458006 kubelet[3613]: I0127 16:01:37.144320    3613 scope.go:117] "RemoveContainer" containerID="31e0003fc44f05a55ffbe600876a63c9cb90b4e646812d2cddf13fb04d2dcd71"
	Jan 27 16:01:37 no-preload-458006 kubelet[3613]: E0127 16:01:37.144931    3613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-s2dxk_kubernetes-dashboard(bbbc5f2c-9a78-48be-ad3a-0bdc09270825)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-s2dxk" podUID="bbbc5f2c-9a78-48be-ad3a-0bdc09270825"
	Jan 27 16:01:38 no-preload-458006 kubelet[3613]: E0127 16:01:38.145696    3613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-k7879" podUID="137f45e8-cf1d-404b-af06-4b99a257450f"
	Jan 27 16:01:38 no-preload-458006 kubelet[3613]: E0127 16:01:38.594174    3613 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993698593775311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:38 no-preload-458006 kubelet[3613]: E0127 16:01:38.594308    3613 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993698593775311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [549c80f01ef1298adca735949add29c5d1eeabe85faea5e7fb53a1cb314e0500] <==
	2025/01/27 15:49:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:49:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:50:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:50:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:51:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:51:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:52:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:52:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:53:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:53:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:54:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:54:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:55:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:55:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:56:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:56:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:57:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:57:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:58:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:58:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:59:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:59:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:00:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:00:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:01:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [acc79d4a492235ef0d2dca0fc6903797e24a96ed751e3da9ededda483cd92521] <==
	I0127 15:40:35.155671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 15:40:35.191609       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 15:40:35.191752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 15:40:35.210989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 15:40:35.215707       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-458006_7af2be8e-accf-44a9-9882-2299e3577701!
	I0127 15:40:35.217837       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ec11396f-91b7-48b3-b032-778b6b2ec867", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-458006_7af2be8e-accf-44a9-9882-2299e3577701 became leader
	I0127 15:40:35.315875       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-458006_7af2be8e-accf-44a9-9882-2299e3577701!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-458006 -n no-preload-458006
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-458006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-k7879
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-458006 describe pod metrics-server-f79f97bbb-k7879
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-458006 describe pod metrics-server-f79f97bbb-k7879: exit status 1 (69.5523ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-k7879" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-458006 describe pod metrics-server-f79f97bbb-k7879: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1608.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (1634.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-349782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 15:35:13.443594 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:16.985621 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:16.992053 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:17.003422 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:17.024911 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:17.066413 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:17.147918 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:17.309550 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:17.631144 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:18.273333 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:19.554652 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:22.116944 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:27.238434 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:29.080275 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-349782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (27m11.574656567s)

                                                
                                                
-- stdout --
	* [embed-certs-349782] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-349782" primary control-plane node in "embed-certs-349782" cluster
	* Restarting existing kvm2 VM for "embed-certs-349782" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-349782 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:35:08.790645 1074908 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:35:08.790886 1074908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:35:08.790895 1074908 out.go:358] Setting ErrFile to fd 2...
	I0127 15:35:08.790900 1074908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:35:08.791073 1074908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:35:08.791651 1074908 out.go:352] Setting JSON to false
	I0127 15:35:08.792709 1074908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22656,"bootTime":1737969453,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:35:08.792822 1074908 start.go:139] virtualization: kvm guest
	I0127 15:35:08.794990 1074908 out.go:177] * [embed-certs-349782] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:35:08.796558 1074908 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:35:08.796598 1074908 notify.go:220] Checking for updates...
	I0127 15:35:08.798968 1074908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:35:08.800241 1074908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:35:08.801365 1074908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:35:08.802491 1074908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:35:08.803598 1074908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:35:08.805280 1074908 config.go:182] Loaded profile config "embed-certs-349782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:35:08.805670 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:35:08.805731 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:35:08.821360 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0127 15:35:08.821850 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:35:08.822460 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:35:08.822495 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:35:08.822868 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:35:08.823076 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:08.823307 1074908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:35:08.823608 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:35:08.823643 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:35:08.839113 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33437
	I0127 15:35:08.839639 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:35:08.840243 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:35:08.840276 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:35:08.840649 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:35:08.840858 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:08.879716 1074908 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:35:08.881055 1074908 start.go:297] selected driver: kvm2
	I0127 15:35:08.881073 1074908 start.go:901] validating driver "kvm2" against &{Name:embed-certs-349782 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-349782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.43 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:35:08.881190 1074908 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:35:08.881885 1074908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:35:08.881961 1074908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:35:08.898571 1074908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:35:08.899108 1074908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:35:08.899158 1074908 cni.go:84] Creating CNI manager for ""
	I0127 15:35:08.899236 1074908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:35:08.899300 1074908 start.go:340] cluster config:
	{Name:embed-certs-349782 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-349782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.43 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:35:08.899463 1074908 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:35:08.901502 1074908 out.go:177] * Starting "embed-certs-349782" primary control-plane node in "embed-certs-349782" cluster
	I0127 15:35:08.902914 1074908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:35:08.902965 1074908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 15:35:08.902974 1074908 cache.go:56] Caching tarball of preloaded images
	I0127 15:35:08.903103 1074908 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:35:08.903116 1074908 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 15:35:08.903212 1074908 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/config.json ...
	I0127 15:35:08.903435 1074908 start.go:360] acquireMachinesLock for embed-certs-349782: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:35:12.786559 1074908 start.go:364] duration metric: took 3.883091898s to acquireMachinesLock for "embed-certs-349782"
	I0127 15:35:12.786639 1074908 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:35:12.786652 1074908 fix.go:54] fixHost starting: 
	I0127 15:35:12.787041 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:35:12.787115 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:35:12.805021 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37811
	I0127 15:35:12.805493 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:35:12.806003 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:35:12.806024 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:35:12.806360 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:35:12.806586 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:12.806740 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:35:12.808398 1074908 fix.go:112] recreateIfNeeded on embed-certs-349782: state=Stopped err=<nil>
	I0127 15:35:12.808432 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	W0127 15:35:12.808596 1074908 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:35:12.810509 1074908 out.go:177] * Restarting existing kvm2 VM for "embed-certs-349782" ...
	I0127 15:35:12.811649 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Start
	I0127 15:35:12.811840 1074908 main.go:141] libmachine: (embed-certs-349782) starting domain...
	I0127 15:35:12.811862 1074908 main.go:141] libmachine: (embed-certs-349782) ensuring networks are active...
	I0127 15:35:12.812533 1074908 main.go:141] libmachine: (embed-certs-349782) Ensuring network default is active
	I0127 15:35:12.812882 1074908 main.go:141] libmachine: (embed-certs-349782) Ensuring network mk-embed-certs-349782 is active
	I0127 15:35:12.813325 1074908 main.go:141] libmachine: (embed-certs-349782) getting domain XML...
	I0127 15:35:12.813985 1074908 main.go:141] libmachine: (embed-certs-349782) creating domain...
	I0127 15:35:14.081116 1074908 main.go:141] libmachine: (embed-certs-349782) waiting for IP...
	I0127 15:35:14.082316 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:14.082883 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:14.082983 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:14.082872 1074961 retry.go:31] will retry after 271.174899ms: waiting for domain to come up
	I0127 15:35:14.355233 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:14.355957 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:14.355978 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:14.355919 1074961 retry.go:31] will retry after 285.189204ms: waiting for domain to come up
	I0127 15:35:14.642575 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:14.643206 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:14.643240 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:14.643150 1074961 retry.go:31] will retry after 300.554416ms: waiting for domain to come up
	I0127 15:35:14.945793 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:14.946349 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:14.946372 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:14.946334 1074961 retry.go:31] will retry after 542.185053ms: waiting for domain to come up
	I0127 15:35:15.490115 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:15.490700 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:15.490733 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:15.490661 1074961 retry.go:31] will retry after 499.58125ms: waiting for domain to come up
	I0127 15:35:15.991665 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:15.992317 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:15.992370 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:15.992299 1074961 retry.go:31] will retry after 812.785188ms: waiting for domain to come up
	I0127 15:35:16.807318 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:16.807837 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:16.807862 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:16.807805 1074961 retry.go:31] will retry after 833.577468ms: waiting for domain to come up
	I0127 15:35:17.642700 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:17.643346 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:17.643403 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:17.643267 1074961 retry.go:31] will retry after 1.272532302s: waiting for domain to come up
	I0127 15:35:18.916954 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:18.917547 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:18.917580 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:18.917488 1074961 retry.go:31] will retry after 1.526613289s: waiting for domain to come up
	I0127 15:35:20.446142 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:20.446660 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:20.446683 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:20.446623 1074961 retry.go:31] will retry after 1.778473314s: waiting for domain to come up
	I0127 15:35:22.226618 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:22.227205 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:22.227237 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:22.227184 1074961 retry.go:31] will retry after 1.946376913s: waiting for domain to come up
	I0127 15:35:24.175854 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:24.176486 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:24.176568 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:24.176470 1074961 retry.go:31] will retry after 2.779449383s: waiting for domain to come up
	I0127 15:35:26.957168 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:26.957694 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:26.957719 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:26.957626 1074961 retry.go:31] will retry after 2.797046845s: waiting for domain to come up
	I0127 15:35:29.756730 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:29.757180 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | unable to find current IP address of domain embed-certs-349782 in network mk-embed-certs-349782
	I0127 15:35:29.757226 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | I0127 15:35:29.757157 1074961 retry.go:31] will retry after 3.930616231s: waiting for domain to come up
	I0127 15:35:33.689708 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.690256 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has current primary IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.690315 1074908 main.go:141] libmachine: (embed-certs-349782) found domain IP: 192.168.61.43
	I0127 15:35:33.690358 1074908 main.go:141] libmachine: (embed-certs-349782) reserving static IP address...
	I0127 15:35:33.690840 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "embed-certs-349782", mac: "52:54:00:47:3b:df", ip: "192.168.61.43"} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:33.690863 1074908 main.go:141] libmachine: (embed-certs-349782) reserved static IP address 192.168.61.43 for domain embed-certs-349782
	I0127 15:35:33.690886 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | skip adding static IP to network mk-embed-certs-349782 - found existing host DHCP lease matching {name: "embed-certs-349782", mac: "52:54:00:47:3b:df", ip: "192.168.61.43"}
	I0127 15:35:33.690902 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Getting to WaitForSSH function...
	I0127 15:35:33.690917 1074908 main.go:141] libmachine: (embed-certs-349782) waiting for SSH...
	I0127 15:35:33.693564 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.693952 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:33.694000 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.694106 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Using SSH client type: external
	I0127 15:35:33.694142 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa (-rw-------)
	I0127 15:35:33.694192 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:35:33.694219 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | About to run SSH command:
	I0127 15:35:33.694234 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | exit 0
	I0127 15:35:33.825221 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | SSH cmd err, output: <nil>: 
	I0127 15:35:33.825627 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetConfigRaw
	I0127 15:35:33.826321 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetIP
	I0127 15:35:33.829163 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.829668 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:33.829711 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.829977 1074908 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/config.json ...
	I0127 15:35:33.830175 1074908 machine.go:93] provisionDockerMachine start ...
	I0127 15:35:33.830193 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:33.830415 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:33.832952 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.833398 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:33.833429 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.833582 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:33.833762 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:33.833943 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:33.834141 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:33.834340 1074908 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:33.834636 1074908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.43 22 <nil> <nil>}
	I0127 15:35:33.834657 1074908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:35:33.953890 1074908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 15:35:33.953927 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetMachineName
	I0127 15:35:33.954206 1074908 buildroot.go:166] provisioning hostname "embed-certs-349782"
	I0127 15:35:33.954244 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetMachineName
	I0127 15:35:33.954447 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:33.957546 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.957905 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:33.957934 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:33.958054 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:33.958257 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:33.958461 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:33.958632 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:33.958811 1074908 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:33.959001 1074908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.43 22 <nil> <nil>}
	I0127 15:35:33.959014 1074908 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-349782 && echo "embed-certs-349782" | sudo tee /etc/hostname
	I0127 15:35:34.105761 1074908 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-349782
	
	I0127 15:35:34.105804 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:34.108836 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.109221 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:34.109251 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.109415 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:34.109628 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:34.109775 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:34.109928 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:34.110102 1074908 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:34.110293 1074908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.43 22 <nil> <nil>}
	I0127 15:35:34.110310 1074908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-349782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-349782/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-349782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:35:34.231144 1074908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:35:34.231187 1074908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:35:34.231240 1074908 buildroot.go:174] setting up certificates
	I0127 15:35:34.231258 1074908 provision.go:84] configureAuth start
	I0127 15:35:34.231285 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetMachineName
	I0127 15:35:34.231582 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetIP
	I0127 15:35:34.234614 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.234972 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:34.234999 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.235145 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:34.237689 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.238083 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:34.238106 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.238274 1074908 provision.go:143] copyHostCerts
	I0127 15:35:34.238339 1074908 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:35:34.238363 1074908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:35:34.238431 1074908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:35:34.238554 1074908 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:35:34.238565 1074908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:35:34.238591 1074908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:35:34.238729 1074908 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:35:34.238742 1074908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:35:34.238768 1074908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:35:34.238843 1074908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-349782 san=[127.0.0.1 192.168.61.43 embed-certs-349782 localhost minikube]
	I0127 15:35:34.361936 1074908 provision.go:177] copyRemoteCerts
	I0127 15:35:34.361996 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:35:34.362024 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:34.364878 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.365360 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:34.365390 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.365575 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:34.365760 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:34.365973 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:34.366163 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:35:34.451338 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 15:35:34.475907 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 15:35:34.500789 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:35:34.528055 1074908 provision.go:87] duration metric: took 296.782484ms to configureAuth
	I0127 15:35:34.528103 1074908 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:35:34.528365 1074908 config.go:182] Loaded profile config "embed-certs-349782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:35:34.528468 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:34.531464 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.531798 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:34.531834 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.531972 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:34.532181 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:34.532392 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:34.532567 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:34.532776 1074908 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:34.533076 1074908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.43 22 <nil> <nil>}
	I0127 15:35:34.533100 1074908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:35:34.774794 1074908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:35:34.774829 1074908 machine.go:96] duration metric: took 944.638371ms to provisionDockerMachine
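provisionDockerMachine finishes by writing /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS and restarting cri-o over SSH, as the command above shows. A rough sketch of issuing that same command with golang.org/x/crypto/ssh, reusing the key path and address from the log (minikube's own ssh_runner does this, so this is only an illustration):

    // sysconfig_sketch.go - push the CRIO_MINIKUBE_OPTIONS file and restart cri-o over SSH.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.61.43:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // Same command as logged above: write the sysconfig fragment, then restart cri-o.
        cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
        out, err := sess.CombinedOutput(cmd)
        if err != nil {
            log.Fatalf("%v: %s", err, out)
        }
        fmt.Printf("%s\n", out)
    }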
	I0127 15:35:34.774846 1074908 start.go:293] postStartSetup for "embed-certs-349782" (driver="kvm2")
	I0127 15:35:34.774859 1074908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:35:34.774878 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:34.775234 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:35:34.775273 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:34.777687 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.778009 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:34.778039 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.778276 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:34.778451 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:34.778561 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:34.778720 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:35:34.862963 1074908 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:35:34.867325 1074908 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:35:34.867350 1074908 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:35:34.867416 1074908 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:35:34.867501 1074908 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:35:34.867610 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:35:34.876823 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:35:34.900876 1074908 start.go:296] duration metric: took 126.005623ms for postStartSetup
	I0127 15:35:34.900927 1074908 fix.go:56] duration metric: took 22.114276404s for fixHost
	I0127 15:35:34.900952 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:34.903343 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.903647 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:34.903701 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:34.903848 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:34.904097 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:34.904241 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:34.904386 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:34.904516 1074908 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:34.904696 1074908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.43 22 <nil> <nil>}
	I0127 15:35:34.904706 1074908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:35:35.018150 1074908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737992134.990785457
	
	I0127 15:35:35.018176 1074908 fix.go:216] guest clock: 1737992134.990785457
	I0127 15:35:35.018184 1074908 fix.go:229] Guest: 2025-01-27 15:35:34.990785457 +0000 UTC Remote: 2025-01-27 15:35:34.900932734 +0000 UTC m=+26.150992813 (delta=89.852723ms)
	I0127 15:35:35.018205 1074908 fix.go:200] guest clock delta is within tolerance: 89.852723ms
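The fix step above runs date +%s.%N on the guest and compares it with the host clock; the ~90ms delta is within tolerance, so the guest clock is left alone. A small sketch of that comparison (the 2s tolerance is an assumed value for illustration, not necessarily minikube's threshold):

    // clockdelta_sketch.go - parse the guest's `date +%s.%N` output and check the skew.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func checkClockDelta(guestDate string, tolerance time.Duration) (time.Duration, bool, error) {
        parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, false, err
        }
        var nsec int64
        if len(parts) == 2 {
            frac := (parts[1] + "000000000")[:9] // pad/truncate the fraction to nanoseconds
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return 0, false, err
            }
        }
        delta := time.Since(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }

    func main() {
        // Guest timestamp taken from the log line above.
        delta, ok, err := checkClockDelta("1737992134.990785457\n", 2*time.Second)
        if err != nil {
            panic(err)
        }
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
    }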
	I0127 15:35:35.018211 1074908 start.go:83] releasing machines lock for "embed-certs-349782", held for 22.231605871s
	I0127 15:35:35.018235 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:35.018594 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetIP
	I0127 15:35:35.021312 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:35.021639 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:35.021668 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:35.021829 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:35.022368 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:35.022553 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:35:35.022651 1074908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:35:35.022691 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:35.022749 1074908 ssh_runner.go:195] Run: cat /version.json
	I0127 15:35:35.022770 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:35:35.025603 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:35.025854 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:35.026064 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:35.026088 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:35.026207 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:35.026227 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:35.026230 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:35.026429 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:35.026442 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:35:35.026585 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:35.026744 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:35:35.026773 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:35:35.026908 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:35:35.027045 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:35:35.106755 1074908 ssh_runner.go:195] Run: systemctl --version
	I0127 15:35:35.148850 1074908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:35:35.293795 1074908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:35:35.300066 1074908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:35:35.300147 1074908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:35:35.317172 1074908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:35:35.317208 1074908 start.go:495] detecting cgroup driver to use...
	I0127 15:35:35.317314 1074908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:35:35.334466 1074908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:35:35.349419 1074908 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:35:35.349490 1074908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:35:35.365141 1074908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:35:35.380108 1074908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:35:35.499027 1074908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:35:35.656105 1074908 docker.go:233] disabling docker service ...
	I0127 15:35:35.656181 1074908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:35:35.679360 1074908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:35:35.698589 1074908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:35:35.859904 1074908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:35:36.002297 1074908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:35:36.016778 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:35:36.038419 1074908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 15:35:36.038504 1074908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:36.053404 1074908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:35:36.053602 1074908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:36.066984 1074908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:36.078194 1074908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:36.090754 1074908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:35:36.103005 1074908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:36.118639 1074908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:36.146011 1074908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
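The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, re-add conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A rough Go equivalent of the first of those edits, using the same file path and pattern as the logged sed (illustrative only, not minikube's code):

    // crioconf_sketch.go - rewrite the pause_image line of 02-crio.conf in place.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        data = re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }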
	I0127 15:35:36.161784 1074908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:35:36.172908 1074908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:35:36.172986 1074908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:35:36.187538 1074908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
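When the bridge-netfilter sysctl cannot be read (as above, because br_netfilter is not yet loaded), the fallback is to load the module and enable IPv4 forwarding before restarting cri-o. A sketch of that fallback, assuming it runs as root on the guest:

    // netfilter_sketch.go - load br_netfilter if the sysctl is missing, then enable ip_forward.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // couldn't verify netfilter, which might be okay: try loading the module
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                log.Fatal(err)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            log.Fatal(err)
        }
    }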
	I0127 15:35:36.198853 1074908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:35:36.350727 1074908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:35:36.446266 1074908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:35:36.446362 1074908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:35:36.451395 1074908 start.go:563] Will wait 60s for crictl version
	I0127 15:35:36.451475 1074908 ssh_runner.go:195] Run: which crictl
	I0127 15:35:36.455553 1074908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:35:36.496471 1074908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:35:36.496581 1074908 ssh_runner.go:195] Run: crio --version
	I0127 15:35:36.531906 1074908 ssh_runner.go:195] Run: crio --version
	I0127 15:35:36.574841 1074908 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 15:35:36.576167 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetIP
	I0127 15:35:36.579812 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:36.580248 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:35:36.580295 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:35:36.580578 1074908 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 15:35:36.585700 1074908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:35:36.599245 1074908 kubeadm.go:883] updating cluster {Name:embed-certs-349782 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-349782 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.43 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:35:36.599406 1074908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:35:36.599469 1074908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:35:36.643886 1074908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 15:35:36.643977 1074908 ssh_runner.go:195] Run: which lz4
	I0127 15:35:36.650277 1074908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:35:36.655137 1074908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:35:36.655174 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 15:35:38.251646 1074908 crio.go:462] duration metric: took 1.601417426s to copy over tarball
	I0127 15:35:38.251746 1074908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 15:35:40.835830 1074908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.58405251s)
	I0127 15:35:40.835860 1074908 crio.go:469] duration metric: took 2.58417527s to extract the tarball
	I0127 15:35:40.835871 1074908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 15:35:40.874959 1074908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:35:40.929168 1074908 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 15:35:40.929196 1074908 cache_images.go:84] Images are preloaded, skipping loading
	I0127 15:35:40.929205 1074908 kubeadm.go:934] updating node { 192.168.61.43 8443 v1.32.1 crio true true} ...
	I0127 15:35:40.929319 1074908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-349782 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-349782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:35:40.929391 1074908 ssh_runner.go:195] Run: crio config
	I0127 15:35:40.983910 1074908 cni.go:84] Creating CNI manager for ""
	I0127 15:35:40.983936 1074908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:35:40.983946 1074908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:35:40.983977 1074908 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.43 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-349782 NodeName:embed-certs-349782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 15:35:40.984140 1074908 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-349782"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.43"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.43"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:35:40.984215 1074908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 15:35:40.996600 1074908 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:35:40.996673 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:35:41.008804 1074908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 15:35:41.028301 1074908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:35:41.048805 1074908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0127 15:35:41.067719 1074908 ssh_runner.go:195] Run: grep 192.168.61.43	control-plane.minikube.internal$ /etc/hosts
	I0127 15:35:41.071870 1074908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
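The bash pipeline above rewrites /etc/hosts: drop any existing control-plane.minikube.internal line, append the current mapping, and copy the result back over /etc/hosts. An equivalent sketch in Go (writing to a temp path here instead of copying the file back with sudo):

    // hosts_sketch.go - replace the control-plane.minikube.internal entry in /etc/hosts.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const entry = "192.168.61.43\t" + host

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // equivalent of: grep -v $'\tcontrol-plane.minikube.internal$'
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }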
	I0127 15:35:41.086188 1074908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:35:41.217561 1074908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:35:41.235999 1074908 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782 for IP: 192.168.61.43
	I0127 15:35:41.236033 1074908 certs.go:194] generating shared ca certs ...
	I0127 15:35:41.236058 1074908 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:35:41.236280 1074908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:35:41.236347 1074908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:35:41.236364 1074908 certs.go:256] generating profile certs ...
	I0127 15:35:41.236507 1074908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/client.key
	I0127 15:35:41.236591 1074908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/apiserver.key.152d6404
	I0127 15:35:41.236646 1074908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/proxy-client.key
	I0127 15:35:41.236810 1074908 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:35:41.236856 1074908 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:35:41.236869 1074908 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:35:41.236904 1074908 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:35:41.236939 1074908 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:35:41.236966 1074908 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:35:41.237060 1074908 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:35:41.237938 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:35:41.277293 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:35:41.313350 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:35:41.351608 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:35:41.382884 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 15:35:41.420092 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:35:41.461682 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:35:41.496483 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/embed-certs-349782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 15:35:41.530930 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:35:41.561606 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:35:41.592429 1074908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:35:41.621477 1074908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:35:41.643488 1074908 ssh_runner.go:195] Run: openssl version
	I0127 15:35:41.652242 1074908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:35:41.665152 1074908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:35:41.671591 1074908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:35:41.671677 1074908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:35:41.679315 1074908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:35:41.691840 1074908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:35:41.703629 1074908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:35:41.709988 1074908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:35:41.710086 1074908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:35:41.716137 1074908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:35:41.732490 1074908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:35:41.745586 1074908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:35:41.752273 1074908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:35:41.752350 1074908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:35:41.760604 1074908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
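Each CA bundle copied to /usr/share/ca-certificates is hashed with openssl x509 -hash and symlinked as <hash>.0 under /etc/ssl/certs, as the commands above show. A sketch of that step for the minikubeCA.pem path from the log, shelling out to openssl the same way:

    // cahash_sketch.go - create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA cert.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const pem = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        // ln -fs: drop any stale link first, then recreate it.
        _ = os.Remove(link)
        if err := os.Symlink(pem, link); err != nil {
            log.Fatal(err)
        }
    }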
	I0127 15:35:41.776911 1074908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:35:41.782107 1074908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:35:41.789027 1074908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:35:41.797628 1074908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:35:41.806592 1074908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:35:41.815693 1074908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:35:41.824766 1074908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 15:35:41.833589 1074908 kubeadm.go:392] StartCluster: {Name:embed-certs-349782 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-349782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.43 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:35:41.833725 1074908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:35:41.833794 1074908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:35:41.890699 1074908 cri.go:89] found id: ""
	I0127 15:35:41.890769 1074908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:35:41.902615 1074908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 15:35:41.902635 1074908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 15:35:41.902681 1074908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 15:35:41.914323 1074908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:35:42.246374 1074908 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-349782" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:35:42.246722 1074908 kubeconfig.go:62] /home/jenkins/minikube-integration/20321-1005652/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-349782" cluster setting kubeconfig missing "embed-certs-349782" context setting]
	I0127 15:35:42.247277 1074908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:35:42.308735 1074908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 15:35:42.333940 1074908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.43
	I0127 15:35:42.333990 1074908 kubeadm.go:1160] stopping kube-system containers ...
	I0127 15:35:42.334008 1074908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 15:35:42.334071 1074908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:35:42.388048 1074908 cri.go:89] found id: ""
	I0127 15:35:42.388225 1074908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 15:35:42.407345 1074908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:35:42.419254 1074908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:35:42.419289 1074908 kubeadm.go:157] found existing configuration files:
	
	I0127 15:35:42.419348 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:35:42.429547 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:35:42.429630 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:35:42.440765 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:35:42.451108 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:35:42.451174 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:35:42.462165 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:35:42.472436 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:35:42.472506 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:35:42.483810 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:35:42.494287 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:35:42.494348 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
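Because none of the kubeconfigs under /etc/kubernetes contain the expected control-plane endpoint, each one is removed above so the following kubeadm init phases can regenerate it. A sketch of that check-and-remove loop over the same four files:

    // staleconf_sketch.go - drop kubeconfigs that do not point at the expected endpoint.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // missing file or wrong endpoint: remove it so kubeadm rewrites it
                if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
                    log.Fatal(err)
                }
            }
        }
    }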
	I0127 15:35:42.505914 1074908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:35:42.518248 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:42.738255 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:44.027192 1074908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.28883898s)
	I0127 15:35:44.027226 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:44.283412 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:44.369444 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:44.478825 1074908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:35:44.478931 1074908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:35:44.979431 1074908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:35:45.479704 1074908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:35:45.547970 1074908 api_server.go:72] duration metric: took 1.069145984s to wait for apiserver process to appear ...
	I0127 15:35:45.548007 1074908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:35:45.548034 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:35:45.548699 1074908 api_server.go:269] stopped: https://192.168.61.43:8443/healthz: Get "https://192.168.61.43:8443/healthz": dial tcp 192.168.61.43:8443: connect: connection refused
	I0127 15:35:46.048192 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:35:48.444603 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 15:35:48.444644 1074908 api_server.go:103] status: https://192.168.61.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 15:35:48.444664 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:35:48.567804 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 15:35:48.567842 1074908 api_server.go:103] status: https://192.168.61.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 15:35:48.567861 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:35:48.580619 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 15:35:48.580652 1074908 api_server.go:103] status: https://192.168.61.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 15:35:49.048318 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:35:49.054364 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:35:49.054400 1074908 api_server.go:103] status: https://192.168.61.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:35:49.548602 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:35:49.556184 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:35:49.556225 1074908 api_server.go:103] status: https://192.168.61.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:35:50.048908 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:35:50.056068 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 200:
	ok
	I0127 15:35:50.066559 1074908 api_server.go:141] control plane version: v1.32.1
	I0127 15:35:50.066587 1074908 api_server.go:131] duration metric: took 4.518573183s to wait for apiserver health ...
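The api_server.go lines above record a poll of https://192.168.61.43:8443/healthz that tolerates 500 responses (failed poststarthooks such as rbac/bootstrap-roles) until the endpoint finally returns 200. The following is a minimal Go sketch of that kind of wait, not minikube's actual implementation; the URL, interval, and timeout are illustrative assumptions.

// healthz_poll.go - minimal sketch of polling an apiserver /healthz endpoint
// until it returns 200 or a deadline expires (assumed values throughout).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed cert during bootstrap, so a health
	// probe of this kind typically skips TLS verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// A 500 with failed poststarthooks is expected while the
			// control plane is still coming up; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.43:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}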
	I0127 15:35:50.066597 1074908 cni.go:84] Creating CNI manager for ""
	I0127 15:35:50.066604 1074908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:35:50.068247 1074908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:35:50.069468 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:35:50.094086 1074908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:35:50.123063 1074908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:35:50.135126 1074908 system_pods.go:59] 8 kube-system pods found
	I0127 15:35:50.135186 1074908 system_pods.go:61] "coredns-668d6bf9bc-sscb5" [7922264b-a547-4622-bbac-5e2502e1a103] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 15:35:50.135198 1074908 system_pods.go:61] "etcd-embed-certs-349782" [f5cfd244-6583-48bf-9eba-4ddabd695fd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 15:35:50.135210 1074908 system_pods.go:61] "kube-apiserver-embed-certs-349782" [36a6a554-54db-4563-aa9f-21628b2db1af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 15:35:50.135219 1074908 system_pods.go:61] "kube-controller-manager-embed-certs-349782" [2e5ed4f4-26d0-47f8-9a99-ac79d593c191] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 15:35:50.135228 1074908 system_pods.go:61] "kube-proxy-dxcvx" [3fa13610-5d30-4c4f-b704-e7aade289a63] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 15:35:50.135242 1074908 system_pods.go:61] "kube-scheduler-embed-certs-349782" [bada41e0-d360-4a84-9495-f87f4d7b805c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 15:35:50.135268 1074908 system_pods.go:61] "metrics-server-f79f97bbb-vskgz" [35070aad-6a8c-48c1-a1d7-e6ef04984c4d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:35:50.135279 1074908 system_pods.go:61] "storage-provisioner" [47b0b36b-99af-49d1-bc47-ef48daa58f61] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 15:35:50.135291 1074908 system_pods.go:74] duration metric: took 12.201833ms to wait for pod list to return data ...
	I0127 15:35:50.135306 1074908 node_conditions.go:102] verifying NodePressure condition ...
	I0127 15:35:50.139798 1074908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 15:35:50.139849 1074908 node_conditions.go:123] node cpu capacity is 2
	I0127 15:35:50.139864 1074908 node_conditions.go:105] duration metric: took 4.552306ms to run NodePressure ...
	I0127 15:35:50.139890 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:35:50.466420 1074908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 15:35:50.470755 1074908 kubeadm.go:739] kubelet initialised
	I0127 15:35:50.470781 1074908 kubeadm.go:740] duration metric: took 4.332329ms waiting for restarted kubelet to initialise ...
	I0127 15:35:50.470793 1074908 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:35:50.475406 1074908 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-sscb5" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:50.480557 1074908 pod_ready.go:98] node "embed-certs-349782" hosting pod "coredns-668d6bf9bc-sscb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-349782" has status "Ready":"False"
	I0127 15:35:50.480580 1074908 pod_ready.go:82] duration metric: took 5.150512ms for pod "coredns-668d6bf9bc-sscb5" in "kube-system" namespace to be "Ready" ...
	E0127 15:35:50.480591 1074908 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-349782" hosting pod "coredns-668d6bf9bc-sscb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-349782" has status "Ready":"False"
	I0127 15:35:50.480600 1074908 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:50.484475 1074908 pod_ready.go:98] node "embed-certs-349782" hosting pod "etcd-embed-certs-349782" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-349782" has status "Ready":"False"
	I0127 15:35:50.484495 1074908 pod_ready.go:82] duration metric: took 3.886196ms for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	E0127 15:35:50.484505 1074908 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-349782" hosting pod "etcd-embed-certs-349782" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-349782" has status "Ready":"False"
	I0127 15:35:50.484512 1074908 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:50.489380 1074908 pod_ready.go:98] node "embed-certs-349782" hosting pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-349782" has status "Ready":"False"
	I0127 15:35:50.489402 1074908 pod_ready.go:82] duration metric: took 4.881993ms for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	E0127 15:35:50.489410 1074908 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-349782" hosting pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-349782" has status "Ready":"False"
	I0127 15:35:50.489416 1074908 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:35:52.497122 1074908 pod_ready.go:103] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:54.502869 1074908 pod_ready.go:103] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:56.997319 1074908 pod_ready.go:103] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:35:59.498404 1074908 pod_ready.go:103] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:01.500270 1074908 pod_ready.go:103] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:05.044484 1074908 pod_ready.go:93] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:05.044511 1074908 pod_ready.go:82] duration metric: took 14.555086846s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:05.044535 1074908 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dxcvx" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:05.062271 1074908 pod_ready.go:93] pod "kube-proxy-dxcvx" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:05.062305 1074908 pod_ready.go:82] duration metric: took 17.761467ms for pod "kube-proxy-dxcvx" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:05.062320 1074908 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:05.074188 1074908 pod_ready.go:93] pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:05.074213 1074908 pod_ready.go:82] duration metric: took 11.885382ms for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:05.074223 1074908 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:07.082846 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:09.584641 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:12.081344 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:14.082395 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:16.581345 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:18.581477 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:21.080799 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:23.580922 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:26.080702 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:28.081168 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:30.580254 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:32.580805 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:34.580893 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:37.080571 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:39.581067 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:42.080554 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:44.580432 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:46.580941 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:48.581204 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:50.581369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:53.081776 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:55.082494 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:57.581057 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:59.582142 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:01.582480 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:04.080795 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:06.581337 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:09.081549 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:11.582465 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:14.081299 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:16.082026 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:18.582474 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:21.080832 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:23.582319 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:26.080703 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:28.581510 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:30.581647 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:33.080881 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:35.582126 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:38.081587 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:40.580606 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:42.581554 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:45.080412 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:47.081908 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:49.582309 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:52.080439 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:54.081369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:56.581569 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.582848 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:00.583568 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.081418 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:05.581452 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:07.587869 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:10.085186 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:12.580963 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:14.581198 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:16.581669 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:19.082119 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:21.583597 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:24.081574 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:26.084881 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.581361 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:31.080211 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.582580 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:36.081107 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.582581 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:41.080457 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:43.082314 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:45.581743 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:47.582153 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:50.081002 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:52.581311 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:54.581795 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:57.080722 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:59.581769 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:02.080943 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:04.081681 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:06.582635 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.080839 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.581560 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:14.080371 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.582575 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.584549 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:21.080013 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.080298 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.081823 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.581035 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.082518 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.580263 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.587256 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.080093 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.580910 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.581608 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.587620 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:46.079379 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:48.081369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.581346 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:53.080934 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:55.082397 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:57.581203 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:00.081973 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:02.581659 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:05.075392 1074908 pod_ready.go:82] duration metric: took 4m0.001148212s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" ...
	E0127 15:40:05.075435 1074908 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:40:05.075460 1074908 pod_ready.go:39] duration metric: took 4m14.604653981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
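The pod_ready.go loop above repeatedly checks the Ready condition of metrics-server-f79f97bbb-vskgz until a 4m0s deadline expires. As a rough illustration of that pattern (not minikube's code), a client-go sketch could look like the following; the kubeconfig path in main is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports condition Ready=True,
// the same shape of loop recorded above for the metrics-server pod.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-f79f97bbb-vskgz", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}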
	I0127 15:40:05.075504 1074908 kubeadm.go:597] duration metric: took 4m23.17285487s to restartPrimaryControlPlane
	W0127 15:40:05.075610 1074908 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:40:05.075649 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:32.977057 1074908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.901370931s)
	I0127 15:40:32.977156 1074908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:32.998093 1074908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:33.014544 1074908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:33.041108 1074908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:33.041138 1074908 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:33.041203 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:40:33.058390 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:33.058462 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:33.070074 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:40:33.087447 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:33.087524 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:33.101890 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:40:33.112384 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:33.112460 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:33.122774 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:40:33.133115 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:33.133183 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:33.143719 1074908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:33.201432 1074908 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:33.201519 1074908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:33.371439 1074908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:33.371619 1074908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:33.371746 1074908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:33.380800 1074908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:33.383521 1074908 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:33.383651 1074908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:33.383757 1074908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:33.383895 1074908 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:33.383985 1074908 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:33.384074 1074908 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:33.384147 1074908 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:33.384245 1074908 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:33.384323 1074908 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:33.384413 1074908 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:33.384510 1074908 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:33.384563 1074908 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:33.384642 1074908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:33.553965 1074908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:33.739507 1074908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:33.994637 1074908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:34.154265 1074908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:34.373069 1074908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:34.373791 1074908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:34.379843 1074908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:34.381325 1074908 out.go:235]   - Booting up control plane ...
	I0127 15:40:34.381471 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:34.381579 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:34.382092 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:34.406494 1074908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:34.413899 1074908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:34.414029 1074908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:34.583151 1074908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:34.583269 1074908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:35.584905 1074908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001687336s
	I0127 15:40:35.585033 1074908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:40.587681 1074908 kubeadm.go:310] [api-check] The API server is healthy after 5.001284493s
	I0127 15:40:40.610814 1074908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:40:40.631959 1074908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:40:40.691115 1074908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:40:40.691368 1074908 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-349782 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:40:40.717976 1074908 kubeadm.go:310] [bootstrap-token] Using token: 2miseq.yzn49d7krpbx0jxu
	I0127 15:40:40.719603 1074908 out.go:235]   - Configuring RBAC rules ...
	I0127 15:40:40.719764 1074908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:40:40.734536 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:40:40.754140 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:40:40.763500 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:40:40.769897 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:40:40.777335 1074908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:40:40.995105 1074908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:40:41.449029 1074908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:40:41.995223 1074908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:40:41.996543 1074908 kubeadm.go:310] 
	I0127 15:40:41.996660 1074908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:40:41.996672 1074908 kubeadm.go:310] 
	I0127 15:40:41.996788 1074908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:40:41.996798 1074908 kubeadm.go:310] 
	I0127 15:40:41.996838 1074908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:40:41.996921 1074908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:40:41.996994 1074908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:40:41.997025 1074908 kubeadm.go:310] 
	I0127 15:40:41.997151 1074908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:40:41.997173 1074908 kubeadm.go:310] 
	I0127 15:40:41.997241 1074908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:40:41.997253 1074908 kubeadm.go:310] 
	I0127 15:40:41.997329 1074908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:40:41.997435 1074908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:40:41.997539 1074908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:40:41.997547 1074908 kubeadm.go:310] 
	I0127 15:40:41.997672 1074908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:40:41.997777 1074908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:40:41.997789 1074908 kubeadm.go:310] 
	I0127 15:40:41.997873 1074908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2miseq.yzn49d7krpbx0jxu \
	I0127 15:40:41.997954 1074908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:40:41.997974 1074908 kubeadm.go:310] 	--control-plane 
	I0127 15:40:41.997980 1074908 kubeadm.go:310] 
	I0127 15:40:41.998045 1074908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:40:41.998056 1074908 kubeadm.go:310] 
	I0127 15:40:41.998117 1074908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2miseq.yzn49d7krpbx0jxu \
	I0127 15:40:41.998204 1074908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:40:41.999397 1074908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:40:41.999437 1074908 cni.go:84] Creating CNI manager for ""
	I0127 15:40:41.999448 1074908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:40:42.001383 1074908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:40:42.002886 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:40:42.019774 1074908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
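The two lines above show minikube installing a 496-byte bridge CNI config at /etc/cni/net.d/1-k8s.conflist. The exact file it generates is not shown in the log; the sketch below writes a representative bridge+portmap conflist of that kind, with the subnet and plugin options being illustrative assumptions.

package main

import (
	"fmt"
	"os"
)

// A representative bridge-plugin conflist; the real file minikube copies
// over SSH may differ in content and size.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Writing under /etc/cni/net.d requires root; minikube performs the
	// equivalent step remotely via ssh_runner as logged above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}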
	I0127 15:40:42.041710 1074908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:40:42.041880 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:42.042011 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-349782 minikube.k8s.io/updated_at=2025_01_27T15_40_42_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=embed-certs-349782 minikube.k8s.io/primary=true
	I0127 15:40:42.071903 1074908 ops.go:34] apiserver oom_adj: -16
	I0127 15:40:42.298644 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:42.799727 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:43.299289 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:43.799485 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:44.299597 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:44.799559 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:45.299631 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:45.388381 1074908 kubeadm.go:1113] duration metric: took 3.346560313s to wait for elevateKubeSystemPrivileges
	I0127 15:40:45.388421 1074908 kubeadm.go:394] duration metric: took 5m3.554845692s to StartCluster
	I0127 15:40:45.388444 1074908 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:45.388536 1074908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:40:45.390768 1074908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:45.391081 1074908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.43 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:40:45.391145 1074908 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:40:45.391269 1074908 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-349782"
	I0127 15:40:45.391288 1074908 addons.go:69] Setting dashboard=true in profile "embed-certs-349782"
	I0127 15:40:45.391320 1074908 addons.go:238] Setting addon dashboard=true in "embed-certs-349782"
	I0127 15:40:45.391319 1074908 addons.go:69] Setting metrics-server=true in profile "embed-certs-349782"
	I0127 15:40:45.391294 1074908 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-349782"
	I0127 15:40:45.391334 1074908 config.go:182] Loaded profile config "embed-certs-349782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:40:45.391343 1074908 addons.go:238] Setting addon metrics-server=true in "embed-certs-349782"
	W0127 15:40:45.391353 1074908 addons.go:247] addon metrics-server should already be in state true
	W0127 15:40:45.391330 1074908 addons.go:247] addon dashboard should already be in state true
	W0127 15:40:45.391338 1074908 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:40:45.391406 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391417 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391276 1074908 addons.go:69] Setting default-storageclass=true in profile "embed-certs-349782"
	I0127 15:40:45.391503 1074908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-349782"
	I0127 15:40:45.391386 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391836 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391838 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391876 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.391925 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391951 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.391954 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391982 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.392044 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.396751 1074908 out.go:177] * Verifying Kubernetes components...
	I0127 15:40:45.398763 1074908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:40:45.411089 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0127 15:40:45.411341 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0127 15:40:45.411740 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.411839 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.412321 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.412348 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.412429 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45519
	I0127 15:40:45.412455 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.412471 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.412710 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.412921 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.413145 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0127 15:40:45.413359 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.413399 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.413439 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.413451 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.413623 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.413854 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.413991 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.414216 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.414233 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.414273 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.414298 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.414583 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.414766 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.414772 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.414845 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.418728 1074908 addons.go:238] Setting addon default-storageclass=true in "embed-certs-349782"
	W0127 15:40:45.418755 1074908 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:40:45.418787 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.419153 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.419189 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.436563 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0127 15:40:45.437032 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.437309 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0127 15:40:45.437764 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.437783 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.437859 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0127 15:40:45.437986 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.438180 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.438423 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.438439 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.438503 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.438549 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.439042 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.439059 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.439120 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.439496 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.439564 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.440296 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.440349 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.440835 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.441539 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0127 15:40:45.442136 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.442687 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.443524 1074908 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:40:45.443584 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.443599 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.443863 1074908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:40:45.443950 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.444664 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.445476 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:40:45.445498 1074908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:40:45.445531 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.446460 1074908 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:40:45.446697 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.451306 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:40:45.456066 1074908 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:40:45.452788 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.456096 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.454144 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.456132 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.456169 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.456379 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.456396 1074908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:40:45.456519 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.456939 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.457981 1074908 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:45.458002 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:40:45.458020 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.460172 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.460862 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.460921 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.461259 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.461487 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.461715 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.461874 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.462195 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.462273 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.462309 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.462659 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.462819 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.462924 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.463019 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.464793 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0127 15:40:45.465301 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.465795 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.465815 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.468906 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.469208 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.471230 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.471522 1074908 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:45.471538 1074908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:40:45.471562 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.474700 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.475171 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.475203 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.475388 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.475596 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.475722 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.475899 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.617662 1074908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:40:45.639438 1074908 node_ready.go:35] waiting up to 6m0s for node "embed-certs-349782" to be "Ready" ...
	I0127 15:40:45.668405 1074908 node_ready.go:49] node "embed-certs-349782" has status "Ready":"True"
	I0127 15:40:45.668432 1074908 node_ready.go:38] duration metric: took 28.956722ms for node "embed-certs-349782" to be "Ready" ...
	I0127 15:40:45.668451 1074908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:45.676760 1074908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:45.743936 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:40:45.743967 1074908 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:40:45.755731 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:45.759201 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:40:45.759233 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:40:45.772228 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:45.805739 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:40:45.805773 1074908 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:40:45.823459 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:40:45.823500 1074908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:40:45.854823 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:40:45.854859 1074908 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:40:45.891284 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:45.891327 1074908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:40:45.931396 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:40:45.931431 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:40:46.015320 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:40:46.015360 1074908 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:40:46.015364 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:46.083527 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:40:46.083563 1074908 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:40:46.246566 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:40:46.246597 1074908 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:40:46.376290 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:40:46.376329 1074908 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:40:46.427597 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:46.427631 1074908 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:40:46.482003 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:47.410166 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.637893772s)
	I0127 15:40:47.410259 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.410166 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.654370109s)
	I0127 15:40:47.410282 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.410349 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.410372 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.410843 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.410875 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.412611 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.412628 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.412638 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.412646 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.412761 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.412798 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.412830 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.412850 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.412903 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.413172 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.413266 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.413342 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.414418 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.414437 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.474683 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.474722 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.475077 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.475151 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.475172 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.777164 1074908 pod_ready.go:103] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:47.977107 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.961691521s)
	I0127 15:40:47.977187 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.977203 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.977515 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.977556 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.977595 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.977608 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.977619 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.977883 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.977933 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.977955 1074908 addons.go:479] Verifying addon metrics-server=true in "embed-certs-349782"
	I0127 15:40:47.977965 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:49.266293 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.7842336s)
	I0127 15:40:49.266371 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:49.266386 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:49.266731 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:49.266754 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:49.266771 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:49.266779 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:49.267033 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:49.267086 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:49.267106 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:49.268778 1074908 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-349782 addons enable metrics-server
	
	I0127 15:40:49.270188 1074908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 15:40:49.271495 1074908 addons.go:514] duration metric: took 3.880366443s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 15:40:50.196894 1074908 pod_ready.go:103] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:51.684593 1074908 pod_ready.go:93] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:51.684619 1074908 pod_ready.go:82] duration metric: took 6.007831808s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.684632 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.693065 1074908 pod_ready.go:93] pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:51.693095 1074908 pod_ready.go:82] duration metric: took 8.4536ms for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.693110 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:52.703593 1074908 pod_ready.go:93] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:52.703626 1074908 pod_ready.go:82] duration metric: took 1.010507584s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:52.703641 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:53.710652 1074908 pod_ready.go:93] pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:53.710683 1074908 pod_ready.go:82] duration metric: took 1.007031836s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:53.710695 1074908 pod_ready.go:39] duration metric: took 8.042232456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:53.710716 1074908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:40:53.710780 1074908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:53.771554 1074908 api_server.go:72] duration metric: took 8.380427434s to wait for apiserver process to appear ...
	I0127 15:40:53.771585 1074908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:40:53.771611 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:40:53.779085 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 200:
	ok
	I0127 15:40:53.780297 1074908 api_server.go:141] control plane version: v1.32.1
	I0127 15:40:53.780325 1074908 api_server.go:131] duration metric: took 8.731633ms to wait for apiserver health ...
	I0127 15:40:53.780335 1074908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:40:53.788343 1074908 system_pods.go:59] 9 kube-system pods found
	I0127 15:40:53.788373 1074908 system_pods.go:61] "coredns-668d6bf9bc-2ggkc" [ae4bf072-7cfb-4a26-8c71-abd3cbc52c28] Running
	I0127 15:40:53.788380 1074908 system_pods.go:61] "coredns-668d6bf9bc-h92kp" [5c29333b-4ea9-44fa-8be6-c350e6b709fe] Running
	I0127 15:40:53.788384 1074908 system_pods.go:61] "etcd-embed-certs-349782" [fcb552ae-bb9e-49de-a183-a26f8cac7e56] Running
	I0127 15:40:53.788388 1074908 system_pods.go:61] "kube-apiserver-embed-certs-349782" [5161cdd2-9cea-4b6d-9023-b20f56e14d9c] Running
	I0127 15:40:53.788392 1074908 system_pods.go:61] "kube-controller-manager-embed-certs-349782" [defbaf3b-e25a-4e20-a602-4be47bd2cc4b] Running
	I0127 15:40:53.788395 1074908 system_pods.go:61] "kube-proxy-vhpzl" [1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf] Running
	I0127 15:40:53.788398 1074908 system_pods.go:61] "kube-scheduler-embed-certs-349782" [ed785153-6f53-4289-a191-5545960c300f] Running
	I0127 15:40:53.788404 1074908 system_pods.go:61] "metrics-server-f79f97bbb-pnbcx" [af453586-d131-4ba7-aa9f-290eb044d58e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:40:53.788411 1074908 system_pods.go:61] "storage-provisioner" [e5c6e59a-52ab-4707-a438-5d01890928db] Running
	I0127 15:40:53.788422 1074908 system_pods.go:74] duration metric: took 8.079129ms to wait for pod list to return data ...
	I0127 15:40:53.788430 1074908 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:40:53.791640 1074908 default_sa.go:45] found service account: "default"
	I0127 15:40:53.791671 1074908 default_sa.go:55] duration metric: took 3.229036ms for default service account to be created ...
	I0127 15:40:53.791682 1074908 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:40:53.798897 1074908 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-349782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
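For reference, the killed start invocation can be replayed by hand against the same profile. The sketch below copies the arguments verbatim from the failure message above; the kubectl readiness probe, and the assumption that the kubeconfig context name matches the profile name, are illustrative additions and not part of the test harness:

	# Re-run the SecondStart step manually (arguments copied from the failure message above).
	out/minikube-linux-amd64 start -p embed-certs-349782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1

	# Illustrative probe for the system pods the harness waits on
	# (assumes minikube created a kubeconfig context named after the profile).
	kubectl --context embed-certs-349782 -n kube-system get pods
	kubectl --context embed-certs-349782 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s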
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-349782 -n embed-certs-349782
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-349782 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-349782 logs -n 25: (1.608548046s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-147179 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | disable-driver-mounts-147179                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:33 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-458006             | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-349782            | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-912913  | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:35 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-458006                  | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-349782                 | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-912913       | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-405706        | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-405706             | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 16:01 UTC | 27 Jan 25 16:01 UTC |
	| start   | -p newest-cni-964010 --memory=2200 --alsologtostderr   | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:01 UTC | 27 Jan 25 16:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 16:01 UTC | 27 Jan 25 16:01 UTC |
	| addons  | enable metrics-server -p newest-cni-964010             | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC | 27 Jan 25 16:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-964010                                   | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC | 27 Jan 25 16:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-964010                  | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC | 27 Jan 25 16:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-964010 --memory=2200 --alsologtostderr   | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 16:02:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 16:02:19.261377 1082222 out.go:345] Setting OutFile to fd 1 ...
	I0127 16:02:19.261477 1082222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 16:02:19.261482 1082222 out.go:358] Setting ErrFile to fd 2...
	I0127 16:02:19.261486 1082222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 16:02:19.261686 1082222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 16:02:19.262260 1082222 out.go:352] Setting JSON to false
	I0127 16:02:19.263221 1082222 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":24286,"bootTime":1737969453,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 16:02:19.263348 1082222 start.go:139] virtualization: kvm guest
	I0127 16:02:19.265748 1082222 out.go:177] * [newest-cni-964010] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 16:02:19.267453 1082222 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 16:02:19.267449 1082222 notify.go:220] Checking for updates...
	I0127 16:02:19.270796 1082222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 16:02:19.272103 1082222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 16:02:19.273540 1082222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 16:02:19.274961 1082222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 16:02:19.276419 1082222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 16:02:19.278185 1082222 config.go:182] Loaded profile config "newest-cni-964010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 16:02:19.278753 1082222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 16:02:19.278849 1082222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 16:02:19.294966 1082222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45475
	I0127 16:02:19.295506 1082222 main.go:141] libmachine: () Calling .GetVersion
	I0127 16:02:19.296105 1082222 main.go:141] libmachine: Using API Version  1
	I0127 16:02:19.296129 1082222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 16:02:19.296560 1082222 main.go:141] libmachine: () Calling .GetMachineName
	I0127 16:02:19.296757 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	I0127 16:02:19.297053 1082222 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 16:02:19.297370 1082222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 16:02:19.297408 1082222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 16:02:19.313334 1082222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42991
	I0127 16:02:19.313859 1082222 main.go:141] libmachine: () Calling .GetVersion
	I0127 16:02:19.314470 1082222 main.go:141] libmachine: Using API Version  1
	I0127 16:02:19.314507 1082222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 16:02:19.314845 1082222 main.go:141] libmachine: () Calling .GetMachineName
	I0127 16:02:19.315090 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	I0127 16:02:19.353432 1082222 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 16:02:19.354837 1082222 start.go:297] selected driver: kvm2
	I0127 16:02:19.354851 1082222 start.go:901] validating driver "kvm2" against &{Name:newest-cni-964010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-964010 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.15 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 16:02:19.354970 1082222 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 16:02:19.355728 1082222 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 16:02:19.355814 1082222 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 16:02:19.372427 1082222 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 16:02:19.372827 1082222 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 16:02:19.372863 1082222 cni.go:84] Creating CNI manager for ""
	I0127 16:02:19.372912 1082222 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 16:02:19.372948 1082222 start.go:340] cluster config:
	{Name:newest-cni-964010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-964010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.15 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 16:02:19.373113 1082222 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 16:02:19.375105 1082222 out.go:177] * Starting "newest-cni-964010" primary control-plane node in "newest-cni-964010" cluster
	I0127 16:02:19.376452 1082222 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 16:02:19.376494 1082222 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 16:02:19.376505 1082222 cache.go:56] Caching tarball of preloaded images
	I0127 16:02:19.376583 1082222 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 16:02:19.376593 1082222 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 16:02:19.376700 1082222 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/newest-cni-964010/config.json ...
	I0127 16:02:19.376878 1082222 start.go:360] acquireMachinesLock for newest-cni-964010: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 16:02:19.376925 1082222 start.go:364] duration metric: took 28.939µs to acquireMachinesLock for "newest-cni-964010"
	I0127 16:02:19.376939 1082222 start.go:96] Skipping create...Using existing machine configuration
	I0127 16:02:19.376947 1082222 fix.go:54] fixHost starting: 
	I0127 16:02:19.377244 1082222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 16:02:19.377280 1082222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 16:02:19.393084 1082222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38449
	I0127 16:02:19.393572 1082222 main.go:141] libmachine: () Calling .GetVersion
	I0127 16:02:19.394171 1082222 main.go:141] libmachine: Using API Version  1
	I0127 16:02:19.394208 1082222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 16:02:19.394627 1082222 main.go:141] libmachine: () Calling .GetMachineName
	I0127 16:02:19.394887 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	I0127 16:02:19.395168 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .GetState
	I0127 16:02:19.396880 1082222 fix.go:112] recreateIfNeeded on newest-cni-964010: state=Stopped err=<nil>
	I0127 16:02:19.396918 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	W0127 16:02:19.397142 1082222 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 16:02:19.400061 1082222 out.go:177] * Restarting existing kvm2 VM for "newest-cni-964010" ...
	
	
	==> CRI-O <==
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.081827500Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5b7551f9c53c13c952493bb7978ac70911b342604a1cdcfc288ec355a1d6ba91,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-gzkx9,Uid:61a7ea33-9eb4-4e71-8b3b-961db290ec8a,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992449417491078,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gzkx9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 61a7ea33-9eb4-4e71-8b3b-961db290ec8a,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T15:40:49.101446581Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef17ea4a6ebaf08aad51c05e7bd91651cb9773822fee4d4fc6221a15c1381cc4,Metadata:&PodSandboxMetadata{Name
:dashboard-metrics-scraper-86c6bf9756-vghs7,Uid:7fa0a0e1-558d-4424-bea6-17f1f4631ec4,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992449381315487,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-vghs7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fa0a0e1-558d-4424-bea6-17f1f4631ec4,k8s-app: dashboard-metrics-scraper,pod-template-hash: 86c6bf9756,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T15:40:49.072799350Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:ae5c7ad786c47a8f556031e5afa07361dfc3dab323d03d90881a2a44db2997c0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e5c6e59a-52ab-4707-a438-5d01890928db,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992448047153020,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test:
storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c6e59a-52ab-4707-a438-5d01890928db,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-27T15:40:47.399940593Z,kubernetes.io/config.s
ource: api,},RuntimeHandler:,},&PodSandbox{Id:c3c2007aff34ea54fad39a4376c50e8a3d3af51375ed8c6d94b2d33acbaf624d,Metadata:&PodSandboxMetadata{Name:metrics-server-f79f97bbb-pnbcx,Uid:af453586-d131-4ba7-aa9f-290eb044d58e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992447966399189,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-f79f97bbb-pnbcx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af453586-d131-4ba7-aa9f-290eb044d58e,k8s-app: metrics-server,pod-template-hash: f79f97bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T15:40:47.639533826Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cc6fcded77e94fc48f225ec3f4c6051864f1551a60434012edb122be29a46146,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-2ggkc,Uid:ae4bf072-7cfb-4a26-8c71-abd3cbc52c28,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992446664328825,Labels:map[string]string{io.kubernetes.container
.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-2ggkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4bf072-7cfb-4a26-8c71-abd3cbc52c28,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T15:40:46.354000809Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a3b17fd164ff1d74c5eba8b368133a1988d318e4a4269fe6000750b1c2e316b,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-h92kp,Uid:5c29333b-4ea9-44fa-8be6-c350e6b709fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992446618695190,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-h92kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c29333b-4ea9-44fa-8be6-c350e6b709fe,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T15:40:46.305127839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSand
box{Id:b18c83f0638f9186225925507b76be271bc774ecd4dce6e161cdb5ae219a98fe,Metadata:&PodSandboxMetadata{Name:kube-proxy-vhpzl,Uid:1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992446523795255,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vhpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T15:40:45.889615690Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e37a38b4fbffcf558662fc4e701cab8ba198338a23d5deda18b400124509f6d9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-349782,Uid:04e9c6cdfebe36948f59b8e5527a333f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992435699644845,Labels:map[string]string{component: kube-controller-manager,io.kuberne
tes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e9c6cdfebe36948f59b8e5527a333f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 04e9c6cdfebe36948f59b8e5527a333f,kubernetes.io/config.seen: 2025-01-27T15:40:35.234490842Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f596b0d86612cad67f9f467852f282ef7ed752449bf4fedaff58677dae17264d,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-349782,Uid:8cd42efc71ab2b305d629147420b778d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992435697360986,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd42efc71ab2b305d629147420b778d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.43:2379,kube
rnetes.io/config.hash: 8cd42efc71ab2b305d629147420b778d,kubernetes.io/config.seen: 2025-01-27T15:40:35.234495359Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00891ab590d0919ff78c2ebabb88dcbe70517989dd6afa140f6f0cbbfc409f68,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-349782,Uid:94101554b5dde7ab5cdfbcf5d2548925,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737992435694245597,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94101554b5dde7ab5cdfbcf5d2548925,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 94101554b5dde7ab5cdfbcf5d2548925,kubernetes.io/config.seen: 2025-01-27T15:40:35.234493577Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:34ed19a0fc0615891367034d80e6864155a5e7330aa77824a2cef45e321705f5,Metadata:&PodSandboxMetadata{Name:kube-apis
erver-embed-certs-349782,Uid:84c7798c196cb226149df4ac254fd251,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1737992435675206380,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.43:8443,kubernetes.io/config.hash: 84c7798c196cb226149df4ac254fd251,kubernetes.io/config.seen: 2025-01-27T15:40:35.234485341Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:21c74108acb180897ab9229d7b974482e7042231df8d6197f27c1fa012832cb7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-349782,Uid:84c7798c196cb226149df4ac254fd251,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1737992144877053778,Labels:map[string]string{component: kube-apiserver,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.43:8443,kubernetes.io/config.hash: 84c7798c196cb226149df4ac254fd251,kubernetes.io/config.seen: 2025-01-27T15:35:44.413190024Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f0f49d8d-74c0-4cf9-ae41-243639ed28ff name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.082681223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3acca8fd-baf7-4ac0-b48d-cbceed6d28ac name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.082729057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3acca8fd-baf7-4ac0-b48d-cbceed6d28ac name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.083287842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ac9a5e76671023b9f02e1e7ad27160f196df208feaba41abbb865f09958754b,PodSandboxId:ef17ea4a6ebaf08aad51c05e7bd91651cb9773822fee4d4fc6221a15c1381cc4,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993709362952365,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-vghs7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fa0a0e1-558d-4424-bea6-17f1f4631ec4,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96173d05b9172a9073a4d08ea473133e185cf87e311645d71092c642ea9e5a,PodSandboxId:5b7551f9c53c13c952493bb7978ac70911b342604a1cdcfc288ec355a1d6ba91,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992460333055701,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gzkx9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 61a7ea33-9eb4-4e71-8b3b-961db290ec8a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9929bb105f4665f0dae261cf1aab426cf88d08bfaa36edf134d42b4f19e6a64e,PodSandboxId:ae5c7ad786c47a8f556031e5afa07361dfc3dab323d03d90881a2a44db2997c0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992448721170673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c6e59a-52ab-4707-a438-5d01890928db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fe90768226af856889655b0bda823a11c36f6c1e1649d780c60f85fe9a29b4,PodSandboxId:4a3b17fd164ff1d74c5eba8b368133a1988d318e4a4269fe6000750b1c2e316b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992448160644238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h92kp,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 5c29333b-4ea9-44fa-8be6-c350e6b709fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c163c090948e5ab914bed6424a206cd877377e7c86a9da9a703b9860bc06f6f,PodSandboxId:cc6fcded77e94fc48f225ec3f4c6051864f1551a60434012edb122be29a46146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992447982633955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2ggkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4bf072-7cfb-4a26-8c71-abd3cbc52c28,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da55589e48a857b4b3272a5615cfd1477f5223b0919f4508db4d324be379e95,PodSandboxId:b18c83f0638f9186225925507b76be271bc774ecd4dce6e161cdb5ae219a98fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992446947256384,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37018b59d64bc24257ad0a4fed116b8eace0dfe80d5b9991d550682c3e3f9c1f,PodSandboxId:e37a38b4fbffcf558662fc4e701cab8ba198338a23d5deda18b400124509f6d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da
055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992435949385083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e9c6cdfebe36948f59b8e5527a333f,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27becac7ddc5657ee7b894fd5f835e0a66b4b19576fd222ac556594ef6bac6d1,PodSandboxId:f596b0d86612cad67f9f467852f282ef7ed752449bf4fedaff58677dae17264d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb
862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992435894568263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd42efc71ab2b305d629147420b778d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af27d1a8a4a28adafec770f6d5e2161dfe831670b683fa17b63bf24df086ec48,PodSandboxId:00891ab590d0919ff78c2ebabb88dcbe70517989dd6afa140f6f0cbbfc409f68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdb
f1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992435931663472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94101554b5dde7ab5cdfbcf5d2548925,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3ba79573e806f9ffa89c367d098a137795798b870d41df8d8990ad3035fc397,PodSandboxId:34ed19a0fc0615891367034d80e6864155a5e7330aa77824a2cef45e321705f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992435866514198,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cadc0e1be60a4969ddccee339fadb83ee4a90744180ccbbf3ccff9e067886c,PodSandboxId:21c74108acb180897ab9229d7b974482e7042231df8d6197f27c1fa012832cb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992145119319307,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3acca8fd-baf7-4ac0-b48d-cbceed6d28ac name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.121985652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a926657d-e75e-49bf-86eb-faee70c73b41 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.122056722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a926657d-e75e-49bf-86eb-faee70c73b41 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.123394270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bc7275d-f8c2-456a-b8f4-7cdd12e9b75c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.123803129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993741123781914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bc7275d-f8c2-456a-b8f4-7cdd12e9b75c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.124366284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5d900e1-b6a0-417b-89f6-5b4cf214a86a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.124423742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5d900e1-b6a0-417b-89f6-5b4cf214a86a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.124652580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ac9a5e76671023b9f02e1e7ad27160f196df208feaba41abbb865f09958754b,PodSandboxId:ef17ea4a6ebaf08aad51c05e7bd91651cb9773822fee4d4fc6221a15c1381cc4,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993709362952365,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-vghs7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fa0a0e1-558d-4424-bea6-17f1f4631ec4,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96173d05b9172a9073a4d08ea473133e185cf87e311645d71092c642ea9e5a,PodSandboxId:5b7551f9c53c13c952493bb7978ac70911b342604a1cdcfc288ec355a1d6ba91,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992460333055701,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gzkx9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 61a7ea33-9eb4-4e71-8b3b-961db290ec8a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9929bb105f4665f0dae261cf1aab426cf88d08bfaa36edf134d42b4f19e6a64e,PodSandboxId:ae5c7ad786c47a8f556031e5afa07361dfc3dab323d03d90881a2a44db2997c0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992448721170673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c6e59a-52ab-4707-a438-5d01890928db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fe90768226af856889655b0bda823a11c36f6c1e1649d780c60f85fe9a29b4,PodSandboxId:4a3b17fd164ff1d74c5eba8b368133a1988d318e4a4269fe6000750b1c2e316b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992448160644238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h92kp,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 5c29333b-4ea9-44fa-8be6-c350e6b709fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c163c090948e5ab914bed6424a206cd877377e7c86a9da9a703b9860bc06f6f,PodSandboxId:cc6fcded77e94fc48f225ec3f4c6051864f1551a60434012edb122be29a46146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992447982633955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2ggkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4bf072-7cfb-4a26-8c71-abd3cbc52c28,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da55589e48a857b4b3272a5615cfd1477f5223b0919f4508db4d324be379e95,PodSandboxId:b18c83f0638f9186225925507b76be271bc774ecd4dce6e161cdb5ae219a98fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992446947256384,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37018b59d64bc24257ad0a4fed116b8eace0dfe80d5b9991d550682c3e3f9c1f,PodSandboxId:e37a38b4fbffcf558662fc4e701cab8ba198338a23d5deda18b400124509f6d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da
055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992435949385083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e9c6cdfebe36948f59b8e5527a333f,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27becac7ddc5657ee7b894fd5f835e0a66b4b19576fd222ac556594ef6bac6d1,PodSandboxId:f596b0d86612cad67f9f467852f282ef7ed752449bf4fedaff58677dae17264d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb
862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992435894568263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd42efc71ab2b305d629147420b778d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af27d1a8a4a28adafec770f6d5e2161dfe831670b683fa17b63bf24df086ec48,PodSandboxId:00891ab590d0919ff78c2ebabb88dcbe70517989dd6afa140f6f0cbbfc409f68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdb
f1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992435931663472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94101554b5dde7ab5cdfbcf5d2548925,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3ba79573e806f9ffa89c367d098a137795798b870d41df8d8990ad3035fc397,PodSandboxId:34ed19a0fc0615891367034d80e6864155a5e7330aa77824a2cef45e321705f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992435866514198,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cadc0e1be60a4969ddccee339fadb83ee4a90744180ccbbf3ccff9e067886c,PodSandboxId:21c74108acb180897ab9229d7b974482e7042231df8d6197f27c1fa012832cb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992145119319307,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5d900e1-b6a0-417b-89f6-5b4cf214a86a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.175294812Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cffeaa21-bfc9-4d61-8a27-babb581450ff name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.175393737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cffeaa21-bfc9-4d61-8a27-babb581450ff name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.177250053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66ac6eba-edba-4bc2-8987-9196ed3d3206 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.177737699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993741177713645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66ac6eba-edba-4bc2-8987-9196ed3d3206 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.178312399Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=230bc6dc-5d95-4e5c-9dac-bbf98dc25c7b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.178459803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=230bc6dc-5d95-4e5c-9dac-bbf98dc25c7b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.178950841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ac9a5e76671023b9f02e1e7ad27160f196df208feaba41abbb865f09958754b,PodSandboxId:ef17ea4a6ebaf08aad51c05e7bd91651cb9773822fee4d4fc6221a15c1381cc4,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993709362952365,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-vghs7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fa0a0e1-558d-4424-bea6-17f1f4631ec4,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96173d05b9172a9073a4d08ea473133e185cf87e311645d71092c642ea9e5a,PodSandboxId:5b7551f9c53c13c952493bb7978ac70911b342604a1cdcfc288ec355a1d6ba91,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992460333055701,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gzkx9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 61a7ea33-9eb4-4e71-8b3b-961db290ec8a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9929bb105f4665f0dae261cf1aab426cf88d08bfaa36edf134d42b4f19e6a64e,PodSandboxId:ae5c7ad786c47a8f556031e5afa07361dfc3dab323d03d90881a2a44db2997c0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992448721170673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c6e59a-52ab-4707-a438-5d01890928db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fe90768226af856889655b0bda823a11c36f6c1e1649d780c60f85fe9a29b4,PodSandboxId:4a3b17fd164ff1d74c5eba8b368133a1988d318e4a4269fe6000750b1c2e316b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992448160644238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h92kp,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 5c29333b-4ea9-44fa-8be6-c350e6b709fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c163c090948e5ab914bed6424a206cd877377e7c86a9da9a703b9860bc06f6f,PodSandboxId:cc6fcded77e94fc48f225ec3f4c6051864f1551a60434012edb122be29a46146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992447982633955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2ggkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4bf072-7cfb-4a26-8c71-abd3cbc52c28,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da55589e48a857b4b3272a5615cfd1477f5223b0919f4508db4d324be379e95,PodSandboxId:b18c83f0638f9186225925507b76be271bc774ecd4dce6e161cdb5ae219a98fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992446947256384,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37018b59d64bc24257ad0a4fed116b8eace0dfe80d5b9991d550682c3e3f9c1f,PodSandboxId:e37a38b4fbffcf558662fc4e701cab8ba198338a23d5deda18b400124509f6d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da
055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992435949385083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e9c6cdfebe36948f59b8e5527a333f,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27becac7ddc5657ee7b894fd5f835e0a66b4b19576fd222ac556594ef6bac6d1,PodSandboxId:f596b0d86612cad67f9f467852f282ef7ed752449bf4fedaff58677dae17264d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb
862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992435894568263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd42efc71ab2b305d629147420b778d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af27d1a8a4a28adafec770f6d5e2161dfe831670b683fa17b63bf24df086ec48,PodSandboxId:00891ab590d0919ff78c2ebabb88dcbe70517989dd6afa140f6f0cbbfc409f68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdb
f1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992435931663472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94101554b5dde7ab5cdfbcf5d2548925,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3ba79573e806f9ffa89c367d098a137795798b870d41df8d8990ad3035fc397,PodSandboxId:34ed19a0fc0615891367034d80e6864155a5e7330aa77824a2cef45e321705f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992435866514198,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cadc0e1be60a4969ddccee339fadb83ee4a90744180ccbbf3ccff9e067886c,PodSandboxId:21c74108acb180897ab9229d7b974482e7042231df8d6197f27c1fa012832cb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992145119319307,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=230bc6dc-5d95-4e5c-9dac-bbf98dc25c7b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.225411969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59fe7ca1-169b-432e-bfe1-edfb1d7aea97 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.225517451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59fe7ca1-169b-432e-bfe1-edfb1d7aea97 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.227346052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=faf08e2b-2ee5-4ac2-bbf5-f89c16a78f37 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.227998948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993741227959960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faf08e2b-2ee5-4ac2-bbf5-f89c16a78f37 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.228787526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=472ffee4-0b9f-41b6-b46b-0383b7e8ea21 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.228886026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=472ffee4-0b9f-41b6-b46b-0383b7e8ea21 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:21 embed-certs-349782 crio[728]: time="2025-01-27 16:02:21.229336430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ac9a5e76671023b9f02e1e7ad27160f196df208feaba41abbb865f09958754b,PodSandboxId:ef17ea4a6ebaf08aad51c05e7bd91651cb9773822fee4d4fc6221a15c1381cc4,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993709362952365,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-vghs7,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7fa0a0e1-558d-4424-bea6-17f1f4631ec4,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b96173d05b9172a9073a4d08ea473133e185cf87e311645d71092c642ea9e5a,PodSandboxId:5b7551f9c53c13c952493bb7978ac70911b342604a1cdcfc288ec355a1d6ba91,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992460333055701,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-gzkx9,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 61a7ea33-9eb4-4e71-8b3b-961db290ec8a,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9929bb105f4665f0dae261cf1aab426cf88d08bfaa36edf134d42b4f19e6a64e,PodSandboxId:ae5c7ad786c47a8f556031e5afa07361dfc3dab323d03d90881a2a44db2997c0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992448721170673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c6e59a-52ab-4707-a438-5d01890928db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fe90768226af856889655b0bda823a11c36f6c1e1649d780c60f85fe9a29b4,PodSandboxId:4a3b17fd164ff1d74c5eba8b368133a1988d318e4a4269fe6000750b1c2e316b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992448160644238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-h92kp,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 5c29333b-4ea9-44fa-8be6-c350e6b709fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c163c090948e5ab914bed6424a206cd877377e7c86a9da9a703b9860bc06f6f,PodSandboxId:cc6fcded77e94fc48f225ec3f4c6051864f1551a60434012edb122be29a46146,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992447982633955,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2ggkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae4bf072-7cfb-4a26-8c71-abd3cbc52c28,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4da55589e48a857b4b3272a5615cfd1477f5223b0919f4508db4d324be379e95,PodSandboxId:b18c83f0638f9186225925507b76be271bc774ecd4dce6e161cdb5ae219a98fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992446947256384,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhpzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37018b59d64bc24257ad0a4fed116b8eace0dfe80d5b9991d550682c3e3f9c1f,PodSandboxId:e37a38b4fbffcf558662fc4e701cab8ba198338a23d5deda18b400124509f6d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da
055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992435949385083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04e9c6cdfebe36948f59b8e5527a333f,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27becac7ddc5657ee7b894fd5f835e0a66b4b19576fd222ac556594ef6bac6d1,PodSandboxId:f596b0d86612cad67f9f467852f282ef7ed752449bf4fedaff58677dae17264d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb
862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992435894568263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd42efc71ab2b305d629147420b778d,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af27d1a8a4a28adafec770f6d5e2161dfe831670b683fa17b63bf24df086ec48,PodSandboxId:00891ab590d0919ff78c2ebabb88dcbe70517989dd6afa140f6f0cbbfc409f68,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdb
f1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992435931663472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94101554b5dde7ab5cdfbcf5d2548925,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3ba79573e806f9ffa89c367d098a137795798b870d41df8d8990ad3035fc397,PodSandboxId:34ed19a0fc0615891367034d80e6864155a5e7330aa77824a2cef45e321705f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992435866514198,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53cadc0e1be60a4969ddccee339fadb83ee4a90744180ccbbf3ccff9e067886c,PodSandboxId:21c74108acb180897ab9229d7b974482e7042231df8d6197f27c1fa012832cb7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992145119319307,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-349782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84c7798c196cb226149df4ac254fd251,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=472ffee4-0b9f-41b6-b46b-0383b7e8ea21 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5ac9a5e766710       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago      Exited              dashboard-metrics-scraper   9                   ef17ea4a6ebaf       dashboard-metrics-scraper-86c6bf9756-vghs7
	6b96173d05b91       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   5b7551f9c53c1       kubernetes-dashboard-7779f9b69b-gzkx9
	9929bb105f466       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   ae5c7ad786c47       storage-provisioner
	11fe90768226a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   4a3b17fd164ff       coredns-668d6bf9bc-h92kp
	7c163c090948e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   cc6fcded77e94       coredns-668d6bf9bc-2ggkc
	4da55589e48a8       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   b18c83f0638f9       kube-proxy-vhpzl
	37018b59d64bc       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   e37a38b4fbffc       kube-controller-manager-embed-certs-349782
	af27d1a8a4a28       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   00891ab590d09       kube-scheduler-embed-certs-349782
	27becac7ddc56       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   f596b0d86612c       etcd-embed-certs-349782
	a3ba79573e806       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   34ed19a0fc061       kube-apiserver-embed-certs-349782
	53cadc0e1be60       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   21c74108acb18       kube-apiserver-embed-certs-349782
	
	
	==> coredns [11fe90768226af856889655b0bda823a11c36f6c1e1649d780c60f85fe9a29b4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [7c163c090948e5ab914bed6424a206cd877377e7c86a9da9a703b9860bc06f6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-349782
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-349782
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=embed-certs-349782
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T15_40_42_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 15:40:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-349782
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 16:02:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 15:57:33 +0000   Mon, 27 Jan 2025 15:40:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 15:57:33 +0000   Mon, 27 Jan 2025 15:40:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 15:57:33 +0000   Mon, 27 Jan 2025 15:40:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 15:57:33 +0000   Mon, 27 Jan 2025 15:40:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.43
	  Hostname:    embed-certs-349782
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5cfeb9a1a3de4866aff5dc366798769a
	  System UUID:                5cfeb9a1-a3de-4866-aff5-dc366798769a
	  Boot ID:                    ea4f4fba-9281-4a48-bc13-892689001b7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-2ggkc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-h92kp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-349782                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-349782             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-349782    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-vhpzl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-349782             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-pnbcx                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-vghs7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-gzkx9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-349782 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-349782 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-349782 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-349782 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-349782 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-349782 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-349782 event: Registered Node embed-certs-349782 in Controller
	  Normal  CIDRAssignmentFailed     21m                cidrAllocator    Node embed-certs-349782 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.042901] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.072299] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.896822] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.611996] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.266040] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.058564] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078251] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.187533] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.168284] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.335248] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +4.886220] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.063512] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.984065] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +5.608595] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.817799] kauditd_printk_skb: 96 callbacks suppressed
	[Jan27 15:40] systemd-fstab-generator[2721]: Ignoring "noauto" option for root device
	[  +0.084541] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.533491] systemd-fstab-generator[3061]: Ignoring "noauto" option for root device
	[  +0.101097] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.315002] systemd-fstab-generator[3171]: Ignoring "noauto" option for root device
	[  +1.256642] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.700477] kauditd_printk_skb: 90 callbacks suppressed
	[  +5.944665] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [27becac7ddc5657ee7b894fd5f835e0a66b4b19576fd222ac556594ef6bac6d1] <==
	{"level":"warn","ts":"2025-01-27T16:01:50.628293Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.235062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T16:01:50.628324Z","caller":"traceutil/trace.go:171","msg":"trace[856521009] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; response_count:0; response_revision:1693; }","duration":"299.313565ms","start":"2025-01-27T16:01:50.328995Z","end":"2025-01-27T16:01:50.628309Z","steps":["trace[856521009] 'count revisions from in-memory index tree'  (duration: 299.182503ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T16:01:50.628423Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.280646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.43\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-01-27T16:01:50.628449Z","caller":"traceutil/trace.go:171","msg":"trace[952838230] range","detail":"{range_begin:/registry/masterleases/192.168.61.43; range_end:; response_count:1; response_revision:1693; }","duration":"143.330861ms","start":"2025-01-27T16:01:50.485110Z","end":"2025-01-27T16:01:50.628441Z","steps":["trace[952838230] 'range keys from in-memory index tree'  (duration: 143.215018ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T16:01:50.628481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.701712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T16:01:50.628500Z","caller":"traceutil/trace.go:171","msg":"trace[708202476] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1693; }","duration":"259.753203ms","start":"2025-01-27T16:01:50.368741Z","end":"2025-01-27T16:01:50.628494Z","steps":["trace[708202476] 'range keys from in-memory index tree'  (duration: 259.643558ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T16:01:50.755331Z","caller":"traceutil/trace.go:171","msg":"trace[1502687643] linearizableReadLoop","detail":"{readStateIndex:1968; appliedIndex:1967; }","duration":"122.187745ms","start":"2025-01-27T16:01:50.633112Z","end":"2025-01-27T16:01:50.755300Z","steps":["trace[1502687643] 'read index received'  (duration: 121.935819ms)","trace[1502687643] 'applied index is now lower than readState.Index'  (duration: 251.019µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T16:01:50.755538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.402871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T16:01:50.755610Z","caller":"traceutil/trace.go:171","msg":"trace[1497294683] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1693; }","duration":"122.504682ms","start":"2025-01-27T16:01:50.633091Z","end":"2025-01-27T16:01:50.755596Z","steps":["trace[1497294683] 'agreement among raft nodes before linearized reading'  (duration: 122.383063ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T16:01:51.028690Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.541449ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8254698618103689559 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.43\" mod_revision:1684 > success:<request_put:<key:\"/registry/masterleases/192.168.61.43\" value_size:67 lease:8254698618103689556 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.43\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T16:01:51.028781Z","caller":"traceutil/trace.go:171","msg":"trace[1350759300] linearizableReadLoop","detail":"{readStateIndex:1969; appliedIndex:1968; }","duration":"260.85835ms","start":"2025-01-27T16:01:50.767914Z","end":"2025-01-27T16:01:51.028772Z","steps":["trace[1350759300] 'read index received'  (duration: 114.726544ms)","trace[1350759300] 'applied index is now lower than readState.Index'  (duration: 146.131006ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T16:01:51.028907Z","caller":"traceutil/trace.go:171","msg":"trace[1851316557] transaction","detail":"{read_only:false; response_revision:1694; number_of_response:1; }","duration":"272.48257ms","start":"2025-01-27T16:01:50.756347Z","end":"2025-01-27T16:01:51.028829Z","steps":["trace[1851316557] 'process raft request'  (duration: 126.442609ms)","trace[1851316557] 'compare'  (duration: 145.363704ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T16:01:51.029145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.81913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1119"}
	{"level":"info","ts":"2025-01-27T16:01:51.029736Z","caller":"traceutil/trace.go:171","msg":"trace[1356734174] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1694; }","duration":"113.43063ms","start":"2025-01-27T16:01:50.916281Z","end":"2025-01-27T16:01:51.029711Z","steps":["trace[1356734174] 'agreement among raft nodes before linearized reading'  (duration: 112.706893ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T16:01:51.029188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.284344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T16:01:51.030095Z","caller":"traceutil/trace.go:171","msg":"trace[1631077596] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1694; }","duration":"262.263433ms","start":"2025-01-27T16:01:50.767819Z","end":"2025-01-27T16:01:51.030083Z","steps":["trace[1631077596] 'agreement among raft nodes before linearized reading'  (duration: 261.344432ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T16:01:51.479442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.048127ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T16:01:51.479556Z","caller":"traceutil/trace.go:171","msg":"trace[1323487388] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1694; }","duration":"105.169174ms","start":"2025-01-27T16:01:51.374376Z","end":"2025-01-27T16:01:51.479545Z","steps":["trace[1323487388] 'range keys from in-memory index tree'  (duration: 105.039505ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T16:01:51.479565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.322747ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8254698618103689566 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1691 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T16:01:51.479640Z","caller":"traceutil/trace.go:171","msg":"trace[63543688] linearizableReadLoop","detail":"{readStateIndex:1970; appliedIndex:1969; }","duration":"312.075003ms","start":"2025-01-27T16:01:51.167554Z","end":"2025-01-27T16:01:51.479629Z","steps":["trace[63543688] 'read index received'  (duration: 183.589898ms)","trace[63543688] 'applied index is now lower than readState.Index'  (duration: 128.484074ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T16:01:51.479709Z","caller":"traceutil/trace.go:171","msg":"trace[1494118462] transaction","detail":"{read_only:false; response_revision:1695; number_of_response:1; }","duration":"442.789641ms","start":"2025-01-27T16:01:51.036914Z","end":"2025-01-27T16:01:51.479703Z","steps":["trace[1494118462] 'process raft request'  (duration: 314.278359ms)","trace[1494118462] 'compare'  (duration: 128.125581ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T16:01:51.479771Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T16:01:51.036830Z","time spent":"442.913784ms","remote":"127.0.0.1:55250","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1691 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-27T16:01:51.480220Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"312.678155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T16:01:51.480277Z","caller":"traceutil/trace.go:171","msg":"trace[1472906644] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1695; }","duration":"312.758ms","start":"2025-01-27T16:01:51.167513Z","end":"2025-01-27T16:01:51.480271Z","steps":["trace[1472906644] 'agreement among raft nodes before linearized reading'  (duration: 312.682802ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T16:01:51.480307Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T16:01:51.167498Z","time spent":"312.80379ms","remote":"127.0.0.1:55278","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	
	
	==> kernel <==
	 16:02:21 up 27 min,  0 users,  load average: 0.22, 0.25, 0.22
	Linux embed-certs-349782 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [53cadc0e1be60a4969ddccee339fadb83ee4a90744180ccbbf3ccff9e067886c] <==
	W0127 15:40:31.696330       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:31.728645       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:31.790346       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:31.816358       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:31.860299       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:31.864980       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:31.890156       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:31.996579       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.002729       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.042253       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.067612       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.158502       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.171270       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.171384       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.238308       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.296412       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.319235       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.359129       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.465215       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.468731       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.601923       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.610744       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.657288       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.727172       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:32.941219       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a3ba79573e806f9ffa89c367d098a137795798b870d41df8d8990ad3035fc397] <==
	I0127 15:58:39.558493       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 15:58:39.558587       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 16:00:38.555567       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:00:38.556036       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 16:00:39.558370       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 16:00:39.558454       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:00:39.558577       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 16:00:39.558651       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 16:00:39.559907       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 16:00:39.559985       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 16:01:39.560689       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:01:39.561098       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 16:01:39.560758       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:01:39.561313       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 16:01:39.562520       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 16:01:39.562622       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [37018b59d64bc24257ad0a4fed116b8eace0dfe80d5b9991d550682c3e3f9c1f] <==
	I0127 15:57:33.619532       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-349782"
	E0127 15:57:45.362587       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:57:45.441908       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:58:15.371609       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:58:15.450564       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:58:45.379003       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:58:45.459186       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:59:15.389012       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:59:15.466913       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:59:45.395541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:59:45.474559       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:00:15.404273       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:00:15.483566       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:00:45.410943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:00:45.491717       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:01:15.417293       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:01:15.500708       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:01:45.428569       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:01:45.511183       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 16:01:50.232286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="100.277µs"
	I0127 16:01:51.779300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="65.528µs"
	I0127 16:02:02.364352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="93.991µs"
	E0127 16:02:15.435069       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:02:15.518708       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 16:02:16.372687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="108.045µs"
	
	
	==> kube-proxy [4da55589e48a857b4b3272a5615cfd1477f5223b0919f4508db4d324be379e95] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 15:40:48.143668       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 15:40:48.162163       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.43"]
	E0127 15:40:48.162658       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 15:40:48.268972       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 15:40:48.269022       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 15:40:48.269048       1 server_linux.go:170] "Using iptables Proxier"
	I0127 15:40:48.273946       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 15:40:48.274266       1 server.go:497] "Version info" version="v1.32.1"
	I0127 15:40:48.274294       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:40:48.278780       1 config.go:199] "Starting service config controller"
	I0127 15:40:48.279000       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 15:40:48.279107       1 config.go:105] "Starting endpoint slice config controller"
	I0127 15:40:48.279113       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 15:40:48.284415       1 config.go:329] "Starting node config controller"
	I0127 15:40:48.284449       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 15:40:48.379379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 15:40:48.379426       1 shared_informer.go:320] Caches are synced for service config
	I0127 15:40:48.385232       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [af27d1a8a4a28adafec770f6d5e2161dfe831670b683fa17b63bf24df086ec48] <==
	W0127 15:40:38.607617       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 15:40:38.607829       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.425052       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:39.425116       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.453704       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 15:40:39.453773       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.482307       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 15:40:39.482377       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.500629       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 15:40:39.500707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.523912       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:39.523970       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.585504       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:39.585557       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.749409       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 15:40:39.749484       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 15:40:39.758059       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 15:40:39.758125       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.762826       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 15:40:39.764942       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.782363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:39.782596       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:39.787518       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 15:40:39.787574       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 15:40:41.684359       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 16:01:41 embed-certs-349782 kubelet[3068]: E0127 16:01:41.919909    3068 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993701919126034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:47 embed-certs-349782 kubelet[3068]: E0127 16:01:47.389773    3068 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 16:01:47 embed-certs-349782 kubelet[3068]: E0127 16:01:47.389997    3068 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 16:01:47 embed-certs-349782 kubelet[3068]: E0127 16:01:47.390445    3068 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvcwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-pnbcx_kube-system(af453586-d131-4ba7-aa9f-290eb044d58e): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 16:01:47 embed-certs-349782 kubelet[3068]: E0127 16:01:47.392161    3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-pnbcx" podUID="af453586-d131-4ba7-aa9f-290eb044d58e"
	Jan 27 16:01:49 embed-certs-349782 kubelet[3068]: I0127 16:01:49.346672    3068 scope.go:117] "RemoveContainer" containerID="b65ebb467d8599e044b7e93ae790f14e962327d1a98371c5af3cbf3b4884f58c"
	Jan 27 16:01:49 embed-certs-349782 kubelet[3068]: I0127 16:01:49.943221    3068 scope.go:117] "RemoveContainer" containerID="b65ebb467d8599e044b7e93ae790f14e962327d1a98371c5af3cbf3b4884f58c"
	Jan 27 16:01:49 embed-certs-349782 kubelet[3068]: I0127 16:01:49.943565    3068 scope.go:117] "RemoveContainer" containerID="5ac9a5e76671023b9f02e1e7ad27160f196df208feaba41abbb865f09958754b"
	Jan 27 16:01:49 embed-certs-349782 kubelet[3068]: E0127 16:01:49.943738    3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vghs7_kubernetes-dashboard(7fa0a0e1-558d-4424-bea6-17f1f4631ec4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vghs7" podUID="7fa0a0e1-558d-4424-bea6-17f1f4631ec4"
	Jan 27 16:01:51 embed-certs-349782 kubelet[3068]: I0127 16:01:51.677916    3068 scope.go:117] "RemoveContainer" containerID="5ac9a5e76671023b9f02e1e7ad27160f196df208feaba41abbb865f09958754b"
	Jan 27 16:01:51 embed-certs-349782 kubelet[3068]: E0127 16:01:51.678149    3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vghs7_kubernetes-dashboard(7fa0a0e1-558d-4424-bea6-17f1f4631ec4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vghs7" podUID="7fa0a0e1-558d-4424-bea6-17f1f4631ec4"
	Jan 27 16:01:51 embed-certs-349782 kubelet[3068]: E0127 16:01:51.921770    3068 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993711921393150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:01:51 embed-certs-349782 kubelet[3068]: E0127 16:01:51.921831    3068 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993711921393150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:01 embed-certs-349782 kubelet[3068]: E0127 16:02:01.924905    3068 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993721923599962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:01 embed-certs-349782 kubelet[3068]: E0127 16:02:01.924973    3068 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993721923599962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:02 embed-certs-349782 kubelet[3068]: E0127 16:02:02.347604    3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-pnbcx" podUID="af453586-d131-4ba7-aa9f-290eb044d58e"
	Jan 27 16:02:04 embed-certs-349782 kubelet[3068]: I0127 16:02:04.345713    3068 scope.go:117] "RemoveContainer" containerID="5ac9a5e76671023b9f02e1e7ad27160f196df208feaba41abbb865f09958754b"
	Jan 27 16:02:04 embed-certs-349782 kubelet[3068]: E0127 16:02:04.345979    3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vghs7_kubernetes-dashboard(7fa0a0e1-558d-4424-bea6-17f1f4631ec4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vghs7" podUID="7fa0a0e1-558d-4424-bea6-17f1f4631ec4"
	Jan 27 16:02:11 embed-certs-349782 kubelet[3068]: E0127 16:02:11.926276    3068 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993731925960102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:11 embed-certs-349782 kubelet[3068]: E0127 16:02:11.926332    3068 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993731925960102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:16 embed-certs-349782 kubelet[3068]: E0127 16:02:16.347046    3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-pnbcx" podUID="af453586-d131-4ba7-aa9f-290eb044d58e"
	Jan 27 16:02:18 embed-certs-349782 kubelet[3068]: I0127 16:02:18.345515    3068 scope.go:117] "RemoveContainer" containerID="5ac9a5e76671023b9f02e1e7ad27160f196df208feaba41abbb865f09958754b"
	Jan 27 16:02:18 embed-certs-349782 kubelet[3068]: E0127 16:02:18.345823    3068 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-vghs7_kubernetes-dashboard(7fa0a0e1-558d-4424-bea6-17f1f4631ec4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-vghs7" podUID="7fa0a0e1-558d-4424-bea6-17f1f4631ec4"
	Jan 27 16:02:21 embed-certs-349782 kubelet[3068]: E0127 16:02:21.928813    3068 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993741928505793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:21 embed-certs-349782 kubelet[3068]: E0127 16:02:21.928907    3068 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993741928505793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [6b96173d05b9172a9073a4d08ea473133e185cf87e311645d71092c642ea9e5a] <==
	2025/01/27 15:50:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:50:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:51:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:51:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:52:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:52:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:53:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:53:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:54:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:54:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:55:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:55:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:56:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:56:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:57:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:57:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:58:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:58:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:59:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:59:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:00:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:00:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:01:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:01:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:02:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [9929bb105f4665f0dae261cf1aab426cf88d08bfaa36edf134d42b4f19e6a64e] <==
	I0127 15:40:48.893746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 15:40:48.946478       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 15:40:48.947269       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 15:40:48.975975       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 15:40:48.977243       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-349782_0996a9ca-1c5b-4380-8b7c-61c5bbcecfe2!
	I0127 15:40:48.979787       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"479130f0-58d0-4012-98da-94115d9c1e64", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-349782_0996a9ca-1c5b-4380-8b7c-61c5bbcecfe2 became leader
	I0127 15:40:49.084966       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-349782_0996a9ca-1c5b-4380-8b7c-61c5bbcecfe2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-349782 -n embed-certs-349782
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-349782 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-pnbcx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-349782 describe pod metrics-server-f79f97bbb-pnbcx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-349782 describe pod metrics-server-f79f97bbb-pnbcx: exit status 1 (69.260799ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-pnbcx" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-349782 describe pod metrics-server-f79f97bbb-pnbcx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1634.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1626.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-912913 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 15:35:37.479997 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:38.928388 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:38.934917 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:38.946423 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:38.967989 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:39.009469 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:39.091087 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:39.252449 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:39.573982 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:40.216097 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:41.497768 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:44.059628 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:49.181817 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:54.405890 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:57.962042 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:35:59.423224 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-912913 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (27m4.436179655s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-912913] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-912913" primary control-plane node in "default-k8s-diff-port-912913" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-912913" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-912913 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:35:32.465073 1075160 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:35:32.465209 1075160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:35:32.465221 1075160 out.go:358] Setting ErrFile to fd 2...
	I0127 15:35:32.465229 1075160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:35:32.465441 1075160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:35:32.465997 1075160 out.go:352] Setting JSON to false
	I0127 15:35:32.467062 1075160 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22679,"bootTime":1737969453,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:35:32.467163 1075160 start.go:139] virtualization: kvm guest
	I0127 15:35:32.469598 1075160 out.go:177] * [default-k8s-diff-port-912913] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:35:32.471186 1075160 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:35:32.471170 1075160 notify.go:220] Checking for updates...
	I0127 15:35:32.472636 1075160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:35:32.474097 1075160 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:35:32.475700 1075160 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:35:32.477182 1075160 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:35:32.478450 1075160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:35:32.480299 1075160 config.go:182] Loaded profile config "default-k8s-diff-port-912913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:35:32.480697 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:35:32.480775 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:35:32.497510 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42587
	I0127 15:35:32.497950 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:35:32.498541 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:35:32.498565 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:35:32.498936 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:35:32.499135 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:32.499407 1075160 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:35:32.499754 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:35:32.499792 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:35:32.514971 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0127 15:35:32.515453 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:35:32.515951 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:35:32.515970 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:35:32.516242 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:35:32.516459 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:32.552909 1075160 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:35:32.554331 1075160 start.go:297] selected driver: kvm2
	I0127 15:35:32.554353 1075160 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-912913 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-912913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:35:32.554494 1075160 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:35:32.555478 1075160 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:35:32.555599 1075160 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:35:32.571879 1075160 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:35:32.572274 1075160 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:35:32.572309 1075160 cni.go:84] Creating CNI manager for ""
	I0127 15:35:32.572361 1075160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:35:32.572400 1075160 start.go:340] cluster config:
	{Name:default-k8s-diff-port-912913 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-912913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:35:32.572511 1075160 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:35:32.575391 1075160 out.go:177] * Starting "default-k8s-diff-port-912913" primary control-plane node in "default-k8s-diff-port-912913" cluster
	I0127 15:35:32.576823 1075160 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:35:32.576870 1075160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 15:35:32.576884 1075160 cache.go:56] Caching tarball of preloaded images
	I0127 15:35:32.576984 1075160 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:35:32.576998 1075160 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 15:35:32.577150 1075160 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/config.json ...
	I0127 15:35:32.577363 1075160 start.go:360] acquireMachinesLock for default-k8s-diff-port-912913: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:35:35.018323 1075160 start.go:364] duration metric: took 2.440904854s to acquireMachinesLock for "default-k8s-diff-port-912913"
	I0127 15:35:35.018377 1075160 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:35:35.018425 1075160 fix.go:54] fixHost starting: 
	I0127 15:35:35.018882 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:35:35.018930 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:35:35.036493 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0127 15:35:35.036872 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:35:35.037377 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:35:35.037398 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:35:35.037726 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:35:35.037945 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:35.038119 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:35:35.039744 1075160 fix.go:112] recreateIfNeeded on default-k8s-diff-port-912913: state=Stopped err=<nil>
	I0127 15:35:35.039773 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	W0127 15:35:35.039935 1075160 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:35:35.042897 1075160 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-912913" ...
	I0127 15:35:35.044132 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Start
	I0127 15:35:35.044349 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) starting domain...
	I0127 15:35:35.044374 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) ensuring networks are active...
	I0127 15:35:35.045163 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Ensuring network default is active
	I0127 15:35:35.045582 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Ensuring network mk-default-k8s-diff-port-912913 is active
	I0127 15:35:35.046074 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) getting domain XML...
	I0127 15:35:35.046832 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) creating domain...
	I0127 15:35:36.353772 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) waiting for IP...
	I0127 15:35:36.354762 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:36.355353 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:36.355462 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:36.355323 1075197 retry.go:31] will retry after 288.119903ms: waiting for domain to come up
	I0127 15:35:36.645074 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:36.645759 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:36.645790 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:36.645705 1075197 retry.go:31] will retry after 387.53314ms: waiting for domain to come up
	I0127 15:35:37.035452 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:37.036155 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:37.036194 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:37.036115 1075197 retry.go:31] will retry after 420.91968ms: waiting for domain to come up
	I0127 15:35:37.458891 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:37.459594 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:37.459632 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:37.459555 1075197 retry.go:31] will retry after 534.973183ms: waiting for domain to come up
	I0127 15:35:37.995925 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:37.996442 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:37.996478 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:37.996410 1075197 retry.go:31] will retry after 573.88889ms: waiting for domain to come up
	I0127 15:35:38.572159 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:38.572712 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:38.572743 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:38.572684 1075197 retry.go:31] will retry after 950.119409ms: waiting for domain to come up
	I0127 15:35:39.525029 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:39.525634 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:39.525678 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:39.525615 1075197 retry.go:31] will retry after 806.990039ms: waiting for domain to come up
	I0127 15:35:40.334162 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:40.334870 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:40.334912 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:40.334840 1075197 retry.go:31] will retry after 1.426690266s: waiting for domain to come up
	I0127 15:35:41.762822 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:41.763449 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:41.763485 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:41.763417 1075197 retry.go:31] will retry after 1.329587492s: waiting for domain to come up
	I0127 15:35:43.094234 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:43.094726 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:43.094751 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:43.094698 1075197 retry.go:31] will retry after 2.202752699s: waiting for domain to come up
	I0127 15:35:45.299931 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:45.300615 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:45.300650 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:45.300575 1075197 retry.go:31] will retry after 2.628150674s: waiting for domain to come up
	I0127 15:35:47.930852 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:47.931520 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:47.931577 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:47.931495 1075197 retry.go:31] will retry after 3.576941825s: waiting for domain to come up
	I0127 15:35:51.510059 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:51.510566 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | unable to find current IP address of domain default-k8s-diff-port-912913 in network mk-default-k8s-diff-port-912913
	I0127 15:35:51.510595 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | I0127 15:35:51.510534 1075197 retry.go:31] will retry after 3.736154585s: waiting for domain to come up
	I0127 15:35:55.248233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.248814 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has current primary IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.248837 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) found domain IP: 192.168.39.160
	I0127 15:35:55.248882 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) reserving static IP address...
	I0127 15:35:55.249263 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-912913", mac: "52:54:00:04:e7:ab", ip: "192.168.39.160"} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.249304 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | skip adding static IP to network mk-default-k8s-diff-port-912913 - found existing host DHCP lease matching {name: "default-k8s-diff-port-912913", mac: "52:54:00:04:e7:ab", ip: "192.168.39.160"}
	I0127 15:35:55.249315 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) reserved static IP address 192.168.39.160 for domain default-k8s-diff-port-912913
	I0127 15:35:55.249334 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) waiting for SSH...
	I0127 15:35:55.249345 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Getting to WaitForSSH function...
	I0127 15:35:55.251302 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.251669 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.251714 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.251836 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Using SSH client type: external
	I0127 15:35:55.251856 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa (-rw-------)
	I0127 15:35:55.251874 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:35:55.251893 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | About to run SSH command:
	I0127 15:35:55.251903 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | exit 0
	I0127 15:35:55.376898 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | SSH cmd err, output: <nil>: 
	I0127 15:35:55.377297 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetConfigRaw
	I0127 15:35:55.378034 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetIP
	I0127 15:35:55.380522 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.380826 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.380878 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.381139 1075160 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/config.json ...
	I0127 15:35:55.381394 1075160 machine.go:93] provisionDockerMachine start ...
	I0127 15:35:55.381420 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:55.381623 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:55.383680 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.384009 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.384039 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.384158 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:55.384336 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:55.384533 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:55.384683 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:55.384852 1075160 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:55.385119 1075160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0127 15:35:55.385135 1075160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:35:55.497845 1075160 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 15:35:55.497882 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetMachineName
	I0127 15:35:55.498159 1075160 buildroot.go:166] provisioning hostname "default-k8s-diff-port-912913"
	I0127 15:35:55.498195 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetMachineName
	I0127 15:35:55.498378 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:55.501181 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.501535 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.501569 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.501682 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:55.501892 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:55.502040 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:55.502179 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:55.502323 1075160 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:55.502493 1075160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0127 15:35:55.502505 1075160 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-912913 && echo "default-k8s-diff-port-912913" | sudo tee /etc/hostname
	I0127 15:35:55.629765 1075160 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-912913
	
	I0127 15:35:55.629797 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:55.632634 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.633071 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.633122 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.633418 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:55.633645 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:55.633830 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:55.633985 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:55.634161 1075160 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:55.634406 1075160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0127 15:35:55.634454 1075160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-912913' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-912913/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-912913' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:35:55.755431 1075160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:35:55.755463 1075160 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:35:55.755504 1075160 buildroot.go:174] setting up certificates
	I0127 15:35:55.755520 1075160 provision.go:84] configureAuth start
	I0127 15:35:55.755540 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetMachineName
	I0127 15:35:55.755877 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetIP
	I0127 15:35:55.758634 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.759040 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.759071 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.759237 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:55.761224 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.761469 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.761514 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.761663 1075160 provision.go:143] copyHostCerts
	I0127 15:35:55.761720 1075160 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:35:55.761740 1075160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:35:55.761796 1075160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:35:55.761893 1075160 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:35:55.761901 1075160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:35:55.761919 1075160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:35:55.761983 1075160 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:35:55.761990 1075160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:35:55.762008 1075160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:35:55.762074 1075160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-912913 san=[127.0.0.1 192.168.39.160 default-k8s-diff-port-912913 localhost minikube]
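For illustration only, a minimal, self-contained Go sketch of the kind of operation logged above: creating a CA and signing a server certificate that carries the same style of DNS/IP SAN list. This is not minikube's code; names, key sizes and lifetimes are assumptions, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair and self-signed CA certificate (errors dropped for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key pair and certificate with SANs like the ones in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-912913"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-912913", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.160")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the signed server certificate in PEM form (the "server.pem" of the log).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}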
	I0127 15:35:55.989028 1075160 provision.go:177] copyRemoteCerts
	I0127 15:35:55.989104 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:35:55.989143 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:55.992491 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.992897 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:55.992946 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:55.993078 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:55.993296 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:55.993486 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:55.993655 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:35:56.078976 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:35:56.104920 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 15:35:56.130447 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 15:35:56.155532 1075160 provision.go:87] duration metric: took 399.993162ms to configureAuth
	I0127 15:35:56.155567 1075160 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:35:56.155756 1075160 config.go:182] Loaded profile config "default-k8s-diff-port-912913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:35:56.155838 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:56.158968 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.159418 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:56.159452 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.159606 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:56.159819 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:56.160014 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:56.160173 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:56.160331 1075160 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:56.160586 1075160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0127 15:35:56.160609 1075160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:35:56.394832 1075160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:35:56.394871 1075160 machine.go:96] duration metric: took 1.013458494s to provisionDockerMachine
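As a hedged sketch of the "native SSH client" step above, the snippet below runs the same kind of remote command (write /etc/sysconfig/crio.minikube, then restart crio) over golang.org/x/crypto/ssh. It is not minikube's implementation; the host, user and key path are taken from the log, everything else is an assumption.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.160:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command text as the SSH command shown in the log above.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("err=%v output=%s\n", err, out)
}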
	I0127 15:35:56.394887 1075160 start.go:293] postStartSetup for "default-k8s-diff-port-912913" (driver="kvm2")
	I0127 15:35:56.394902 1075160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:35:56.394930 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:56.395343 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:35:56.395384 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:56.398265 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.398596 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:56.398627 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.398831 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:56.399056 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:56.399255 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:56.399453 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:35:56.487986 1075160 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:35:56.492410 1075160 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:35:56.492438 1075160 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:35:56.492496 1075160 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:35:56.492580 1075160 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:35:56.492675 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:35:56.502757 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:35:56.528666 1075160 start.go:296] duration metric: took 133.761851ms for postStartSetup
	I0127 15:35:56.528714 1075160 fix.go:56] duration metric: took 21.510298125s for fixHost
	I0127 15:35:56.528737 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:56.531601 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.532084 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:56.532117 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.532331 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:56.532560 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:56.532754 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:56.532917 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:56.533135 1075160 main.go:141] libmachine: Using SSH client type: native
	I0127 15:35:56.533330 1075160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0127 15:35:56.533341 1075160 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:35:56.650206 1075160 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737992156.618826623
	
	I0127 15:35:56.650243 1075160 fix.go:216] guest clock: 1737992156.618826623
	I0127 15:35:56.650254 1075160 fix.go:229] Guest: 2025-01-27 15:35:56.618826623 +0000 UTC Remote: 2025-01-27 15:35:56.528717953 +0000 UTC m=+24.113907486 (delta=90.10867ms)
	I0127 15:35:56.650313 1075160 fix.go:200] guest clock delta is within tolerance: 90.10867ms
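The guest-clock check above compares the VM's `date +%s.%N` output against the host clock. A small illustrative Go sketch of that comparison follows; the parser name and the tolerance shown are assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixDotNanos turns output like "1737992156.618826623" into a time.Time.
func parseUnixDotNanos(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly nine digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixDotNanos("1737992156.618826623")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (compared against some tolerance, e.g. a few seconds)\n", delta)
}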
	I0127 15:35:56.650330 1075160 start.go:83] releasing machines lock for "default-k8s-diff-port-912913", held for 21.631976301s
	I0127 15:35:56.650377 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:56.650688 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetIP
	I0127 15:35:56.653514 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.653914 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:56.653950 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.654147 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:56.654828 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:56.655023 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:35:56.655118 1075160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:35:56.655181 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:56.655277 1075160 ssh_runner.go:195] Run: cat /version.json
	I0127 15:35:56.655315 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:35:56.657922 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.658252 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.658303 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:56.658332 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.658470 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:56.658628 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:56.658683 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:56.658731 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:56.658805 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:56.658937 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:35:56.658975 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:35:56.659101 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:35:56.659266 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:35:56.659421 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:35:56.742425 1075160 ssh_runner.go:195] Run: systemctl --version
	I0127 15:35:56.769162 1075160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:35:56.917888 1075160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:35:56.924464 1075160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:35:56.924551 1075160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:35:56.942907 1075160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
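The find/mv step above moves conflicting bridge and podman CNI configs aside with a ".mk_disabled" suffix. A minimal Go sketch of the same idea, assuming the directory path from the log; the helper name is invented for illustration.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman config files in dir to <name>.mk_disabled.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableCNIConfigs("/etc/cni/net.d")
	fmt.Println(files, err)
}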
	I0127 15:35:56.942942 1075160 start.go:495] detecting cgroup driver to use...
	I0127 15:35:56.943023 1075160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:35:56.960793 1075160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:35:56.975929 1075160 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:35:56.976006 1075160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:35:56.990694 1075160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:35:57.006719 1075160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:35:57.134103 1075160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:35:57.272405 1075160 docker.go:233] disabling docker service ...
	I0127 15:35:57.272497 1075160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:35:57.290405 1075160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:35:57.303100 1075160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:35:57.441545 1075160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:35:57.559160 1075160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:35:57.573932 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:35:57.593375 1075160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 15:35:57.593455 1075160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:57.604103 1075160 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:35:57.604196 1075160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:57.614614 1075160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:57.624792 1075160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:57.635520 1075160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:35:57.646855 1075160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:57.657914 1075160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:35:57.676077 1075160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
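The sed commands above rewrite the pause image and cgroup manager in the CRI-O drop-in config. For illustration, an equivalent in-place edit in Go, assuming the file path and replacement values shown in the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirror the sed expressions: replace whole lines that set pause_image / cgroup_manager.
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = pauseRe.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = cgroupRe.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}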
	I0127 15:35:57.687177 1075160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:35:57.697713 1075160 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:35:57.697788 1075160 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:35:57.711451 1075160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 15:35:57.722341 1075160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:35:57.842213 1075160 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:35:57.936647 1075160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:35:57.936720 1075160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:35:57.942424 1075160 start.go:563] Will wait 60s for crictl version
	I0127 15:35:57.942512 1075160 ssh_runner.go:195] Run: which crictl
	I0127 15:35:57.946658 1075160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:35:57.995189 1075160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:35:57.995258 1075160 ssh_runner.go:195] Run: crio --version
	I0127 15:35:58.027982 1075160 ssh_runner.go:195] Run: crio --version
	I0127 15:35:58.059973 1075160 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 15:35:58.061361 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetIP
	I0127 15:35:58.064320 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:58.064731 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:35:58.064763 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:35:58.064946 1075160 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 15:35:58.069237 1075160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
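The grep/cp one-liner above refreshes the host.minikube.internal mapping in /etc/hosts. A hedged Go sketch of the same pattern (drop the stale line, append the new mapping); the helper name is an assumption.

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<host>" and appends "ip\thost".
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // like the grep -v in the logged command
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}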
	I0127 15:35:58.082562 1075160 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-912913 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-912
913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:35:58.082685 1075160 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 15:35:58.082731 1075160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:35:58.120396 1075160 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 15:35:58.120479 1075160 ssh_runner.go:195] Run: which lz4
	I0127 15:35:58.124678 1075160 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:35:58.128804 1075160 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:35:58.128833 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 15:35:59.582941 1075160 crio.go:462] duration metric: took 1.458295303s to copy over tarball
	I0127 15:35:59.583060 1075160 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 15:36:01.988647 1075160 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.405547567s)
	I0127 15:36:01.988686 1075160 crio.go:469] duration metric: took 2.405708918s to extract the tarball
	I0127 15:36:01.988696 1075160 ssh_runner.go:146] rm: /preloaded.tar.lz4
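The preload step above copies the lz4-compressed image tarball to the VM and unpacks it into /var. For illustration, the extraction command run through os/exec with the same flags as in the log (this assumes tar and lz4 are present on the target, as they are in the minikube guest image):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v output=%s\n", err, out)
}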
	I0127 15:36:02.028216 1075160 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:36:02.083605 1075160 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 15:36:02.083631 1075160 cache_images.go:84] Images are preloaded, skipping loading
	I0127 15:36:02.083641 1075160 kubeadm.go:934] updating node { 192.168.39.160 8444 v1.32.1 crio true true} ...
	I0127 15:36:02.083763 1075160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-912913 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-912913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:36:02.083847 1075160 ssh_runner.go:195] Run: crio config
	I0127 15:36:02.139581 1075160 cni.go:84] Creating CNI manager for ""
	I0127 15:36:02.139609 1075160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:36:02.139623 1075160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:36:02.139654 1075160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8444 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-912913 NodeName:default-k8s-diff-port-912913 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 15:36:02.139858 1075160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-912913"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.160"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:36:02.139941 1075160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 15:36:02.151912 1075160 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:36:02.151986 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:36:02.163980 1075160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0127 15:36:02.184546 1075160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:36:02.204483 1075160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
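The kubeadm.yaml written above is rendered from the cluster settings shown earlier. As a rough, illustrative sketch only (the template text and parameter struct here are invented, not minikube's generator), a text/template rendering of a small fragment of that config:

package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		BindPort:          8444,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.32.1",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}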
	I0127 15:36:02.225474 1075160 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0127 15:36:02.230034 1075160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:36:02.245455 1075160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:36:02.385563 1075160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:36:02.409515 1075160 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913 for IP: 192.168.39.160
	I0127 15:36:02.409548 1075160 certs.go:194] generating shared ca certs ...
	I0127 15:36:02.409572 1075160 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:36:02.409784 1075160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:36:02.409873 1075160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:36:02.409891 1075160 certs.go:256] generating profile certs ...
	I0127 15:36:02.410018 1075160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/client.key
	I0127 15:36:02.410084 1075160 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/apiserver.key.a6cce7e5
	I0127 15:36:02.410124 1075160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/proxy-client.key
	I0127 15:36:02.410234 1075160 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:36:02.410269 1075160 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:36:02.410282 1075160 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:36:02.410345 1075160 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:36:02.410382 1075160 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:36:02.410409 1075160 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:36:02.410451 1075160 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:36:02.411080 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:36:02.474684 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:36:02.509532 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:36:02.557419 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:36:02.586538 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0127 15:36:02.627366 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:36:02.652985 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:36:02.679207 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/default-k8s-diff-port-912913/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 15:36:02.705157 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:36:02.729589 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:36:02.753915 1075160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:36:02.778567 1075160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:36:02.798287 1075160 ssh_runner.go:195] Run: openssl version
	I0127 15:36:02.804636 1075160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:36:02.817356 1075160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:36:02.822120 1075160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:36:02.822174 1075160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:36:02.828636 1075160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:36:02.841231 1075160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:36:02.853073 1075160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:36:02.858213 1075160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:36:02.858269 1075160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:36:02.864174 1075160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:36:02.875905 1075160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:36:02.887715 1075160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:36:02.892563 1075160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:36:02.892653 1075160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:36:02.898933 1075160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
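The ln -fs commands above create OpenSSL subject-hash symlinks (e.g. b5213941.0) so the CA certificates are found by hashed lookup. A minimal Go sketch of that step, shelling out to openssl for the hash; the function name and example paths are assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks <hash>.0 in certsDir to certPath, like "ln -fs" in the log.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // best-effort removal so the symlink can be replaced
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}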
	I0127 15:36:02.911205 1075160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:36:02.916197 1075160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:36:02.923456 1075160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:36:02.930261 1075160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:36:02.936861 1075160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:36:02.944719 1075160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:36:02.951215 1075160 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
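The "openssl x509 -checkend 86400" runs above verify that each control-plane certificate is still valid for at least 24 hours. An equivalent check in pure Go, shown as a sketch (the certificate path is one of those from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within duration d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}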
	I0127 15:36:02.957484 1075160 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-912913 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:default-k8s-diff-port-912913
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.160 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:36:02.957571 1075160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:36:02.957643 1075160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:36:03.001640 1075160 cri.go:89] found id: ""
	I0127 15:36:03.001732 1075160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:36:03.012019 1075160 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 15:36:03.012042 1075160 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 15:36:03.012110 1075160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 15:36:03.021860 1075160 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:36:03.022651 1075160 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-912913" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:36:03.022935 1075160 kubeconfig.go:62] /home/jenkins/minikube-integration/20321-1005652/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-912913" cluster setting kubeconfig missing "default-k8s-diff-port-912913" context setting]
	I0127 15:36:03.023540 1075160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:36:03.147276 1075160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 15:36:03.162562 1075160 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.160
	I0127 15:36:03.162620 1075160 kubeadm.go:1160] stopping kube-system containers ...
	I0127 15:36:03.162640 1075160 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 15:36:03.162709 1075160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:36:03.203121 1075160 cri.go:89] found id: ""
	I0127 15:36:03.203197 1075160 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 15:36:03.221469 1075160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:36:03.233291 1075160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:36:03.233316 1075160 kubeadm.go:157] found existing configuration files:
	
	I0127 15:36:03.233364 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 15:36:03.244178 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:36:03.244254 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:36:03.255989 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 15:36:03.267120 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:36:03.267196 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:36:03.277911 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 15:36:03.288552 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:36:03.288627 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:36:03.299427 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 15:36:03.311503 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:36:03.311567 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:36:03.323052 1075160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:36:03.334218 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:36:03.759929 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:36:04.549523 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:36:04.782297 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:36:04.847622 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:36:04.926512 1075160 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:36:04.926612 1075160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:36:05.427667 1075160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:36:05.927512 1075160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:36:06.037391 1075160 api_server.go:72] duration metric: took 1.11088005s to wait for apiserver process to appear ...
	I0127 15:36:06.037424 1075160 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:36:06.037457 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:36:06.037993 1075160 api_server.go:269] stopped: https://192.168.39.160:8444/healthz: Get "https://192.168.39.160:8444/healthz": dial tcp 192.168.39.160:8444: connect: connection refused
	I0127 15:36:06.537625 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:36:08.618701 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 15:36:08.618732 1075160 api_server.go:103] status: https://192.168.39.160:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 15:36:08.618751 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:36:08.631416 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 15:36:08.631464 1075160 api_server.go:103] status: https://192.168.39.160:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 15:36:09.038187 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:36:09.046214 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:36:09.046247 1075160 api_server.go:103] status: https://192.168.39.160:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:36:09.537845 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:36:09.545576 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 15:36:09.545611 1075160 api_server.go:103] status: https://192.168.39.160:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 15:36:10.038334 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:36:10.050063 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 200:
	ok
	I0127 15:36:10.062514 1075160 api_server.go:141] control plane version: v1.32.1
	I0127 15:36:10.062549 1075160 api_server.go:131] duration metric: took 4.025117264s to wait for apiserver health ...
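The 500 responses above come from the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, which report [-] until the restarted API server finishes seeding its default RBAC roles and priority classes; once they complete, /healthz returns 200 and the health wait finishes in roughly 4 seconds. A minimal sketch of probing the same endpoint by hand, assuming the node IP and this profile's non-default apiserver port 8444 from the log, and assuming anonymous access to /healthz is still allowed (the kubeadm default):

	# per-check [+]/[-] breakdown, same as the log output above
	curl -k "https://192.168.39.160:8444/healthz?verbose"
	# a single post-start hook can also be queried on its own
	curl -k https://192.168.39.160:8444/healthz/poststarthook/rbac/bootstrap-roles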
	I0127 15:36:10.062561 1075160 cni.go:84] Creating CNI manager for ""
	I0127 15:36:10.062567 1075160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:36:10.064538 1075160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:36:10.066056 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:36:10.098620 1075160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
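The 496-byte conflist written here is minikube's built-in bridge CNI config for the crio runtime. Its exact contents vary by minikube version, so rather than reproduce it, a sketch of reading what actually landed on the node (profile name taken from this log, assuming a minikube binary on PATH):

	# print the bridge CNI config that minikube copied to the node
	minikube -p default-k8s-diff-port-912913 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist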
	I0127 15:36:10.130577 1075160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:36:10.141368 1075160 system_pods.go:59] 8 kube-system pods found
	I0127 15:36:10.141415 1075160 system_pods.go:61] "coredns-668d6bf9bc-mqwlf" [844e8477-80be-4301-98bb-784769b8b1a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 15:36:10.141428 1075160 system_pods.go:61] "etcd-default-k8s-diff-port-912913" [a79c4004-7ede-494f-8cc3-a1a325320d3a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 15:36:10.141440 1075160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-912913" [d08f6741-5aa9-48c6-a2ee-9e17ab9ef74a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 15:36:10.141453 1075160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-912913" [d1e51f98-00fd-46b5-a1ec-ff6f520a98be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 15:36:10.141459 1075160 system_pods.go:61] "kube-proxy-h9f9h" [e9e77781-4a36-4958-a983-bba20baa1a8b] Running
	I0127 15:36:10.141469 1075160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-912913" [7fac1d83-e493-43a1-b6fa-a9c28978ccf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 15:36:10.141479 1075160 system_pods.go:61] "metrics-server-f79f97bbb-nj5f8" [3a9039f3-8905-405f-9de2-c1efad13d7c5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:36:10.141493 1075160 system_pods.go:61] "storage-provisioner" [d1de5251-efee-4e36-99e3-163bf8d897e3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 15:36:10.141503 1075160 system_pods.go:74] duration metric: took 10.895292ms to wait for pod list to return data ...
	I0127 15:36:10.141518 1075160 node_conditions.go:102] verifying NodePressure condition ...
	I0127 15:36:10.147884 1075160 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 15:36:10.147921 1075160 node_conditions.go:123] node cpu capacity is 2
	I0127 15:36:10.147935 1075160 node_conditions.go:105] duration metric: took 6.409618ms to run NodePressure ...
	I0127 15:36:10.147957 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
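This step replays only the addon phase of kubeadm against the existing /var/tmp/minikube/kubeadm.yaml, re-installing CoreDNS and kube-proxy without touching certificates or static Pod manifests. The same phase can also be run per addon; a sketch of the equivalent individual calls, run on the node with the same config path shown in the log:

	# re-apply the two built-in addons one at a time
	sudo kubeadm init phase addon coredns --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase addon kube-proxy --config /var/tmp/minikube/kubeadm.yaml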
	I0127 15:36:10.477736 1075160 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 15:36:10.482900 1075160 kubeadm.go:739] kubelet initialised
	I0127 15:36:10.482933 1075160 kubeadm.go:740] duration metric: took 5.168428ms waiting for restarted kubelet to initialise ...
	I0127 15:36:10.482947 1075160 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:36:10.487609 1075160 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-mqwlf" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:12.495005 1075160 pod_ready.go:103] pod "coredns-668d6bf9bc-mqwlf" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:14.994335 1075160 pod_ready.go:103] pod "coredns-668d6bf9bc-mqwlf" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:16.994442 1075160 pod_ready.go:103] pod "coredns-668d6bf9bc-mqwlf" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:18.999057 1075160 pod_ready.go:103] pod "coredns-668d6bf9bc-mqwlf" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:20.994248 1075160 pod_ready.go:93] pod "coredns-668d6bf9bc-mqwlf" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:20.994282 1075160 pod_ready.go:82] duration metric: took 10.506645107s for pod "coredns-668d6bf9bc-mqwlf" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:20.994296 1075160 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:20.999204 1075160 pod_ready.go:93] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:20.999232 1075160 pod_ready.go:82] duration metric: took 4.928624ms for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:20.999241 1075160 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:22.506180 1075160 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:22.506209 1075160 pod_ready.go:82] duration metric: took 1.506958805s for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:22.506224 1075160 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:22.511381 1075160 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:22.511404 1075160 pod_ready.go:82] duration metric: took 5.171762ms for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:22.511414 1075160 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h9f9h" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:22.516051 1075160 pod_ready.go:93] pod "kube-proxy-h9f9h" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:22.516071 1075160 pod_ready.go:82] duration metric: took 4.651666ms for pod "kube-proxy-h9f9h" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:22.516080 1075160 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:24.521878 1075160 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:36:24.521906 1075160 pod_ready.go:82] duration metric: took 2.005819747s for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:24.521916 1075160 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" ...
	I0127 15:36:26.528680 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:29.031164 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:31.529127 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:33.529659 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:36.028635 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:38.528659 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:40.529680 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:43.028783 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:45.528485 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:47.528610 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:49.529713 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:52.028834 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:54.028936 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:56.528331 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:36:59.028192 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:01.029101 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:03.528597 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:05.529458 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:07.530233 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:10.029739 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:12.529775 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:15.028961 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:17.029985 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:19.528663 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:21.534783 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:24.028608 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:26.528912 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:28.529572 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:31.029043 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:33.528570 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:35.529032 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:38.029405 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:40.529309 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:43.029809 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:45.031357 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:47.529106 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:50.027979 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:52.028444 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:54.028868 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:56.529274 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.529695 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:01.029818 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.529199 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:05.530455 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:07.534482 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:10.029370 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:12.528917 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:14.531412 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:17.028589 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:19.029508 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:21.031738 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:23.529051 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:25.530450 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.030083 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:30.030174 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:32.529206 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:35.028518 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:37.031307 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:39.529329 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:42.028378 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:44.028619 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:46.029083 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:48.029471 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:50.529718 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.028591 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:55.529910 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.028613 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:00.529908 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:03.029275 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:05.029418 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:07.030052 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.528140 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.529355 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:13.529804 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.028749 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.029709 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:20.529523 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.029776 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.529747 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.530494 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.028046 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.030227 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.530278 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.028574 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.028638 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.029306 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.529161 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:46.028736 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:48.029038 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.529674 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:52.529918 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:55.028235 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:57.029052 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:59.528435 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:01.530232 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:04.030283 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:06.529988 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:09.029666 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:11.529388 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:14.028951 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:16.029783 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:18.529412 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:21.029561 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:23.031094 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:24.523077 1075160 pod_ready.go:82] duration metric: took 4m0.001138229s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" ...
	E0127 15:40:24.523130 1075160 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:40:24.523156 1075160 pod_ready.go:39] duration metric: took 4m14.040193884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:24.523186 1075160 kubeadm.go:597] duration metric: took 4m21.511137654s to restartPrimaryControlPlane
	W0127 15:40:24.523251 1075160 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
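The only pod that never reaches Ready in the wait above is metrics-server-f79f97bbb-nj5f8; in these integration runs the metrics-server addon is pointed at a deliberately unpullable image (fake.domain/registry.k8s.io/echoserver:1.4, visible when the addon is re-enabled further down), so the 4m0s WaitExtra budget expires and minikube falls back to a full kubeadm reset followed by a fresh kubeadm init. A sketch of confirming the image-pull failure from the host, assuming the kubeconfig context is named after the profile as minikube normally does:

	# the Events section should show repeated ImagePullBackOff for the fake.domain image
	kubectl --context default-k8s-diff-port-912913 -n kube-system describe pod metrics-server-f79f97bbb-nj5f8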
	I0127 15:40:24.523280 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:52.460023 1075160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.936714261s)
	I0127 15:40:52.460128 1075160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:52.476845 1075160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:52.487966 1075160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:52.499961 1075160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:52.499988 1075160 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:52.500037 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 15:40:52.511034 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:52.511115 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:52.524517 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 15:40:52.534966 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:52.535048 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:52.545245 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 15:40:52.555070 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:52.555149 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:52.569605 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 15:40:52.581711 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:52.581794 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:52.592228 1075160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:52.654498 1075160 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:52.654647 1075160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:52.779741 1075160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:52.779912 1075160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:52.780069 1075160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:52.790096 1075160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:52.793113 1075160 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:52.793243 1075160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:52.793339 1075160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:52.793480 1075160 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:52.793582 1075160 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:52.793692 1075160 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:52.793783 1075160 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:52.793875 1075160 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:52.793966 1075160 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:52.794100 1075160 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:52.794204 1075160 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:52.794273 1075160 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:52.794363 1075160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:52.989346 1075160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:53.518286 1075160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:53.684220 1075160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:53.833269 1075160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:53.959433 1075160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:53.959944 1075160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:53.962645 1075160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:53.964848 1075160 out.go:235]   - Booting up control plane ...
	I0127 15:40:53.964986 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:53.965139 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:53.967441 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:53.990143 1075160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:53.997601 1075160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:53.997684 1075160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:54.175814 1075160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:54.175985 1075160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:54.677251 1075160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.539769ms
	I0127 15:40:54.677364 1075160 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:41:00.679789 1075160 kubeadm.go:310] [api-check] The API server is healthy after 6.002206079s
	I0127 15:41:00.695507 1075160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:41:00.712356 1075160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:41:00.738343 1075160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:41:00.738640 1075160 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-912913 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:41:00.753238 1075160 kubeadm.go:310] [bootstrap-token] Using token: 5gsmwo.93b5mx0ng9gboctz
	I0127 15:41:00.754589 1075160 out.go:235]   - Configuring RBAC rules ...
	I0127 15:41:00.754718 1075160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:41:00.773508 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:41:00.781170 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:41:00.784358 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:41:00.787629 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:41:00.790904 1075160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:41:01.087298 1075160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:41:01.539193 1075160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:41:02.088850 1075160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:41:02.089949 1075160 kubeadm.go:310] 
	I0127 15:41:02.090088 1075160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:41:02.090112 1075160 kubeadm.go:310] 
	I0127 15:41:02.090212 1075160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:41:02.090222 1075160 kubeadm.go:310] 
	I0127 15:41:02.090256 1075160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:41:02.090363 1075160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:41:02.090438 1075160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:41:02.090447 1075160 kubeadm.go:310] 
	I0127 15:41:02.090529 1075160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:41:02.090542 1075160 kubeadm.go:310] 
	I0127 15:41:02.090605 1075160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:41:02.090612 1075160 kubeadm.go:310] 
	I0127 15:41:02.090674 1075160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:41:02.090813 1075160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:41:02.090903 1075160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:41:02.090913 1075160 kubeadm.go:310] 
	I0127 15:41:02.091020 1075160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:41:02.091116 1075160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:41:02.091126 1075160 kubeadm.go:310] 
	I0127 15:41:02.091223 1075160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5gsmwo.93b5mx0ng9gboctz \
	I0127 15:41:02.091357 1075160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:41:02.091383 1075160 kubeadm.go:310] 	--control-plane 
	I0127 15:41:02.091393 1075160 kubeadm.go:310] 
	I0127 15:41:02.091482 1075160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:41:02.091490 1075160 kubeadm.go:310] 
	I0127 15:41:02.091576 1075160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5gsmwo.93b5mx0ng9gboctz \
	I0127 15:41:02.091686 1075160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:41:02.093055 1075160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
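The single preflight warning is harmless for these throwaway test VMs, but, as the message itself says, it can be cleared by enabling the kubelet unit so it starts on boot; a minimal sketch, run on the node (for example via minikube ssh):

	sudo systemctl enable kubelet.service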
	I0127 15:41:02.093120 1075160 cni.go:84] Creating CNI manager for ""
	I0127 15:41:02.093134 1075160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:41:02.095065 1075160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:41:02.096511 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:41:02.110508 1075160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:41:02.132628 1075160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:41:02.132723 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:02.132745 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-912913 minikube.k8s.io/updated_at=2025_01_27T15_41_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=default-k8s-diff-port-912913 minikube.k8s.io/primary=true
	I0127 15:41:02.380721 1075160 ops.go:34] apiserver oom_adj: -16
	I0127 15:41:02.380856 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:02.881961 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:03.381153 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:03.881177 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:04.381381 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:04.881601 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.381394 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.881197 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.963844 1075160 kubeadm.go:1113] duration metric: took 3.831201657s to wait for elevateKubeSystemPrivileges
	I0127 15:41:05.963884 1075160 kubeadm.go:394] duration metric: took 5m3.006407652s to StartCluster
	I0127 15:41:05.963905 1075160 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:41:05.964014 1075160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:41:05.966708 1075160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:41:05.967090 1075160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.160 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:41:05.967165 1075160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
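Of the addons in the toEnable map, only dashboard, default-storageclass, metrics-server and storage-provisioner are set to true, which matches the four addon setup paths that follow. A quick cross-check of the resulting addon state for this profile (assuming a minikube binary on PATH):

	# lists each addon with its enabled/disabled status for the given profile
	minikube addons list -p default-k8s-diff-port-912913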
	I0127 15:41:05.967282 1075160 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967302 1075160 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.967308 1075160 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:41:05.967326 1075160 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967343 1075160 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967355 1075160 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-912913"
	I0127 15:41:05.967358 1075160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-912913"
	I0127 15:41:05.967357 1075160 config.go:182] Loaded profile config "default-k8s-diff-port-912913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:41:05.967356 1075160 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967381 1075160 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.967390 1075160 addons.go:247] addon dashboard should already be in state true
	W0127 15:41:05.967362 1075160 addons.go:247] addon metrics-server should already be in state true
	I0127 15:41:05.967334 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967433 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967433 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967803 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967829 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967842 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967854 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967866 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967894 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967857 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967954 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.968953 1075160 out.go:177] * Verifying Kubernetes components...
	I0127 15:41:05.970726 1075160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:41:05.986076 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0127 15:41:05.986613 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.987340 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.987367 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.987696 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0127 15:41:05.987879 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0127 15:41:05.987883 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0127 15:41:05.987924 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.988072 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988235 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988485 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988597 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.988641 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.988725 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.988745 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.988760 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.988775 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.989142 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.989164 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.989172 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989192 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989534 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989721 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:05.989770 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.989789 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.989815 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.989827 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.993646 1075160 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.993672 1075160 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:41:05.993703 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.994089 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.994137 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:06.007391 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I0127 15:41:06.007784 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0127 15:41:06.008229 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.008327 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.008859 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.008880 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.008951 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0127 15:41:06.009182 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.009201 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.009660 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.009740 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.009876 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.010328 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.010393 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.010588 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.010748 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.010833 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.025187 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.025199 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.025187 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.025186 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0127 15:41:06.037186 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.037801 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.038419 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.038439 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.038833 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.039733 1075160 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:41:06.039865 1075160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:41:06.039911 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:06.039947 1075160 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:41:06.039975 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:06.041831 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:41:06.041853 1075160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:41:06.041887 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.042817 1075160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:41:06.042833 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:41:06.042854 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.045474 1075160 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:41:06.047233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.047253 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:41:06.047270 1075160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:41:06.047294 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.047965 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.048037 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.048421 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.048675 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.049034 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.049616 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.051299 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.051321 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.051717 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.051739 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.052033 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.052054 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.052088 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.052323 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.052372 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.052526 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.052702 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.057244 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.057489 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.057880 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.058959 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39803
	I0127 15:41:06.059421 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.059854 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.059866 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.060259 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.060421 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.062233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.062753 1075160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:41:06.062767 1075160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:41:06.062781 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.067605 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.068014 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.068027 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.068243 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.068368 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.068559 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.068695 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.211887 1075160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:41:06.257549 1075160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-912913" to be "Ready" ...
	I0127 15:41:06.305423 1075160 node_ready.go:49] node "default-k8s-diff-port-912913" has status "Ready":"True"
	I0127 15:41:06.305459 1075160 node_ready.go:38] duration metric: took 47.864404ms for node "default-k8s-diff-port-912913" to be "Ready" ...
	I0127 15:41:06.305474 1075160 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:41:06.311746 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:41:06.311780 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:41:06.329198 1075160 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:06.374086 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:41:06.374119 1075160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:41:06.377742 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:41:06.377771 1075160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:41:06.400332 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:41:06.403004 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:41:06.430195 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:41:06.430217 1075160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:41:06.487574 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:41:06.487605 1075160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:41:06.529999 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:41:06.530054 1075160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:41:06.609758 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:41:06.619520 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:41:06.619567 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:41:06.795826 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:41:06.795870 1075160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:41:06.889910 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:41:06.889940 1075160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:41:06.979355 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:41:06.979391 1075160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:41:07.053404 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:41:07.053438 1075160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:41:07.101199 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:41:07.101235 1075160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:41:07.165859 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:41:07.419725 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016680012s)
	I0127 15:41:07.419820 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.419839 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.419841 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.019463574s)
	I0127 15:41:07.419916 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.419939 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420292 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420306 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420322 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420352 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.420365 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420366 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420492 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420521 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420530 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.420538 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420775 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420779 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420786 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420814 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420842 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420849 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.438640 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.438681 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.439056 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.439081 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.439091 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.791715 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.18189835s)
	I0127 15:41:07.791796 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.791813 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.792148 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.792170 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.792181 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.792190 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.792522 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.792570 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.792580 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.792591 1075160 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-912913"
	I0127 15:41:08.375027 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:08.535318 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.369395363s)
	I0127 15:41:08.535382 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:08.535398 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:08.535779 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:08.535833 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:08.535847 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:08.535857 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:08.536129 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:08.536152 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:08.537800 1075160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-912913 addons enable metrics-server
	
	I0127 15:41:08.539323 1075160 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 15:41:08.540713 1075160 addons.go:514] duration metric: took 2.57355558s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 15:41:10.869256 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:13.336050 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:15.338501 1075160 pod_ready.go:93] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.338533 1075160 pod_ready.go:82] duration metric: took 9.009294324s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.338546 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.343866 1075160 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.343889 1075160 pod_ready.go:82] duration metric: took 5.336104ms for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.343898 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.349389 1075160 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.349413 1075160 pod_ready.go:82] duration metric: took 5.508752ms for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.349422 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.355144 1075160 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.355166 1075160 pod_ready.go:82] duration metric: took 5.737289ms for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.355173 1075160 pod_ready.go:39] duration metric: took 9.049686447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:41:15.355191 1075160 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:41:15.355243 1075160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:15.370942 1075160 api_server.go:72] duration metric: took 9.403809848s to wait for apiserver process to appear ...
	I0127 15:41:15.370967 1075160 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:41:15.370986 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:41:15.378733 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 200:
	ok
	I0127 15:41:15.380614 1075160 api_server.go:141] control plane version: v1.32.1
	I0127 15:41:15.380640 1075160 api_server.go:131] duration metric: took 9.666454ms to wait for apiserver health ...
	I0127 15:41:15.380649 1075160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:41:15.390107 1075160 system_pods.go:59] 9 kube-system pods found
	I0127 15:41:15.390141 1075160 system_pods.go:61] "coredns-668d6bf9bc-8rzrt" [92e346ae-cc28-4f80-9424-c4d97ac8106c] Running
	I0127 15:41:15.390147 1075160 system_pods.go:61] "coredns-668d6bf9bc-zw9rm" [c29a853d-5146-4641-a434-d85147dc3b16] Running
	I0127 15:41:15.390151 1075160 system_pods.go:61] "etcd-default-k8s-diff-port-912913" [4eb15463-b135-4347-9c0b-ff5cd9fa0991] Running
	I0127 15:41:15.390155 1075160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-912913" [f1d151d9-bd66-41f1-b2e8-bb495f8a3522] Running
	I0127 15:41:15.390159 1075160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-912913" [da81a47f-a89e-4daa-828c-e1dc1458067c] Running
	I0127 15:41:15.390161 1075160 system_pods.go:61] "kube-proxy-k85rn" [8da8dc48-3019-4fa6-b5c4-58b0b41aefc0] Running
	I0127 15:41:15.390165 1075160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-912913" [9042c262-515d-40d9-9d99-fda8f49b141a] Running
	I0127 15:41:15.390170 1075160 system_pods.go:61] "metrics-server-f79f97bbb-rtx6b" [aed61473-0cc8-4459-9153-5c42e5a10b2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:41:15.390174 1075160 system_pods.go:61] "storage-provisioner" [5fa7b229-cd7d-4aa4-9cee-26a1c5714b3c] Running
	I0127 15:41:15.390184 1075160 system_pods.go:74] duration metric: took 9.526361ms to wait for pod list to return data ...
	I0127 15:41:15.390193 1075160 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:41:15.394345 1075160 default_sa.go:45] found service account: "default"
	I0127 15:41:15.394371 1075160 default_sa.go:55] duration metric: took 4.169137ms for default service account to be created ...
	I0127 15:41:15.394380 1075160 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:41:15.537654 1075160 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-912913 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-912913 -n default-k8s-diff-port-912913
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-912913 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-912913 logs -n 25: (1.451688552s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:33 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-458006             | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-349782            | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-912913  | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:35 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-458006                  | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-349782                 | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-912913       | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-405706        | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-405706             | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 16:01 UTC | 27 Jan 25 16:01 UTC |
	| start   | -p newest-cni-964010 --memory=2200 --alsologtostderr   | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:01 UTC | 27 Jan 25 16:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 16:01 UTC | 27 Jan 25 16:01 UTC |
	| addons  | enable metrics-server -p newest-cni-964010             | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC | 27 Jan 25 16:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-964010                                   | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC | 27 Jan 25 16:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-964010                  | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC | 27 Jan 25 16:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-964010 --memory=2200 --alsologtostderr   | newest-cni-964010            | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 16:02 UTC | 27 Jan 25 16:02 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 16:02:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 16:02:19.261377 1082222 out.go:345] Setting OutFile to fd 1 ...
	I0127 16:02:19.261477 1082222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 16:02:19.261482 1082222 out.go:358] Setting ErrFile to fd 2...
	I0127 16:02:19.261486 1082222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 16:02:19.261686 1082222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 16:02:19.262260 1082222 out.go:352] Setting JSON to false
	I0127 16:02:19.263221 1082222 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":24286,"bootTime":1737969453,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 16:02:19.263348 1082222 start.go:139] virtualization: kvm guest
	I0127 16:02:19.265748 1082222 out.go:177] * [newest-cni-964010] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 16:02:19.267453 1082222 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 16:02:19.267449 1082222 notify.go:220] Checking for updates...
	I0127 16:02:19.270796 1082222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 16:02:19.272103 1082222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 16:02:19.273540 1082222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 16:02:19.274961 1082222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 16:02:19.276419 1082222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 16:02:19.278185 1082222 config.go:182] Loaded profile config "newest-cni-964010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 16:02:19.278753 1082222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 16:02:19.278849 1082222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 16:02:19.294966 1082222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45475
	I0127 16:02:19.295506 1082222 main.go:141] libmachine: () Calling .GetVersion
	I0127 16:02:19.296105 1082222 main.go:141] libmachine: Using API Version  1
	I0127 16:02:19.296129 1082222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 16:02:19.296560 1082222 main.go:141] libmachine: () Calling .GetMachineName
	I0127 16:02:19.296757 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	I0127 16:02:19.297053 1082222 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 16:02:19.297370 1082222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 16:02:19.297408 1082222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 16:02:19.313334 1082222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42991
	I0127 16:02:19.313859 1082222 main.go:141] libmachine: () Calling .GetVersion
	I0127 16:02:19.314470 1082222 main.go:141] libmachine: Using API Version  1
	I0127 16:02:19.314507 1082222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 16:02:19.314845 1082222 main.go:141] libmachine: () Calling .GetMachineName
	I0127 16:02:19.315090 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	I0127 16:02:19.353432 1082222 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 16:02:19.354837 1082222 start.go:297] selected driver: kvm2
	I0127 16:02:19.354851 1082222 start.go:901] validating driver "kvm2" against &{Name:newest-cni-964010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-964010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.15 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 16:02:19.354970 1082222 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 16:02:19.355728 1082222 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 16:02:19.355814 1082222 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 16:02:19.372427 1082222 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 16:02:19.372827 1082222 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 16:02:19.372863 1082222 cni.go:84] Creating CNI manager for ""
	I0127 16:02:19.372912 1082222 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 16:02:19.372948 1082222 start.go:340] cluster config:
	{Name:newest-cni-964010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-964010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.15 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 16:02:19.373113 1082222 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 16:02:19.375105 1082222 out.go:177] * Starting "newest-cni-964010" primary control-plane node in "newest-cni-964010" cluster
	I0127 16:02:19.376452 1082222 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 16:02:19.376494 1082222 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 16:02:19.376505 1082222 cache.go:56] Caching tarball of preloaded images
	I0127 16:02:19.376583 1082222 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 16:02:19.376593 1082222 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 16:02:19.376700 1082222 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/newest-cni-964010/config.json ...
	I0127 16:02:19.376878 1082222 start.go:360] acquireMachinesLock for newest-cni-964010: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 16:02:19.376925 1082222 start.go:364] duration metric: took 28.939µs to acquireMachinesLock for "newest-cni-964010"
	I0127 16:02:19.376939 1082222 start.go:96] Skipping create...Using existing machine configuration
	I0127 16:02:19.376947 1082222 fix.go:54] fixHost starting: 
	I0127 16:02:19.377244 1082222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 16:02:19.377280 1082222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 16:02:19.393084 1082222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38449
	I0127 16:02:19.393572 1082222 main.go:141] libmachine: () Calling .GetVersion
	I0127 16:02:19.394171 1082222 main.go:141] libmachine: Using API Version  1
	I0127 16:02:19.394208 1082222 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 16:02:19.394627 1082222 main.go:141] libmachine: () Calling .GetMachineName
	I0127 16:02:19.394887 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	I0127 16:02:19.395168 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .GetState
	I0127 16:02:19.396880 1082222 fix.go:112] recreateIfNeeded on newest-cni-964010: state=Stopped err=<nil>
	I0127 16:02:19.396918 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .DriverName
	W0127 16:02:19.397142 1082222 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 16:02:19.400061 1082222 out.go:177] * Restarting existing kvm2 VM for "newest-cni-964010" ...
	I0127 16:02:19.401592 1082222 main.go:141] libmachine: (newest-cni-964010) Calling .Start
	I0127 16:02:19.401869 1082222 main.go:141] libmachine: (newest-cni-964010) starting domain...
	I0127 16:02:19.401894 1082222 main.go:141] libmachine: (newest-cni-964010) ensuring networks are active...
	I0127 16:02:19.402682 1082222 main.go:141] libmachine: (newest-cni-964010) Ensuring network default is active
	I0127 16:02:19.402987 1082222 main.go:141] libmachine: (newest-cni-964010) Ensuring network mk-newest-cni-964010 is active
	I0127 16:02:19.403290 1082222 main.go:141] libmachine: (newest-cni-964010) getting domain XML...
	I0127 16:02:19.404040 1082222 main.go:141] libmachine: (newest-cni-964010) creating domain...
	I0127 16:02:20.733081 1082222 main.go:141] libmachine: (newest-cni-964010) waiting for IP...
	I0127 16:02:20.734117 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:20.734663 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:20.734782 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:20.734628 1082257 retry.go:31] will retry after 212.938477ms: waiting for domain to come up
	I0127 16:02:20.949540 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:20.950228 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:20.950256 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:20.950201 1082257 retry.go:31] will retry after 339.398747ms: waiting for domain to come up
	I0127 16:02:21.291065 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:21.291846 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:21.291877 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:21.291795 1082257 retry.go:31] will retry after 435.991235ms: waiting for domain to come up
	I0127 16:02:21.729490 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:21.730127 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:21.730161 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:21.730077 1082257 retry.go:31] will retry after 426.623529ms: waiting for domain to come up
	I0127 16:02:22.159087 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:22.159719 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:22.159756 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:22.159676 1082257 retry.go:31] will retry after 748.049598ms: waiting for domain to come up
	I0127 16:02:22.909703 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:22.910394 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:22.910472 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:22.910355 1082257 retry.go:31] will retry after 623.977983ms: waiting for domain to come up
	I0127 16:02:23.591340 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:23.591874 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:23.591904 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:23.591851 1082257 retry.go:31] will retry after 731.014775ms: waiting for domain to come up
	I0127 16:02:24.325162 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:24.325806 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:24.325832 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:24.325764 1082257 retry.go:31] will retry after 1.123829042s: waiting for domain to come up
	I0127 16:02:25.450955 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:25.451645 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:25.451722 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:25.451631 1082257 retry.go:31] will retry after 1.242418526s: waiting for domain to come up
	I0127 16:02:26.695229 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:26.695793 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:26.695842 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:26.695792 1082257 retry.go:31] will retry after 1.725232781s: waiting for domain to come up
	I0127 16:02:28.422387 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:28.422947 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:28.422976 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:28.422893 1082257 retry.go:31] will retry after 2.071296017s: waiting for domain to come up
	I0127 16:02:30.495908 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:30.496445 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:30.496472 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:30.496419 1082257 retry.go:31] will retry after 2.806844761s: waiting for domain to come up
	I0127 16:02:33.306200 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | domain newest-cni-964010 has defined MAC address 52:54:00:2c:a1:0c in network mk-newest-cni-964010
	I0127 16:02:33.306589 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | unable to find current IP address of domain newest-cni-964010 in network mk-newest-cni-964010
	I0127 16:02:33.306614 1082222 main.go:141] libmachine: (newest-cni-964010) DBG | I0127 16:02:33.306565 1082257 retry.go:31] will retry after 2.949016142s: waiting for domain to come up
	
	
	==> CRI-O <==
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.508228533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993757508210265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f449e07-b6d4-4f19-9040-ae795e342f11 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.508749241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d7d551e-6013-48a3-8498-a514b47ad073 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.508818880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d7d551e-6013-48a3-8498-a514b47ad073 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.509044896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47ff28a231090f63655d3122ded0bbdad0f09de4110c17ceae9b6fe691b209cd,PodSandboxId:ddbb436bdb987c129e605487093ec61f759e5f35321ae9c047ed3a2b4da21cb6,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993755519553958,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-gzr4w,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0c584a6a-9023-474f-9e0f-0ba4d8090c46,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0460e13ef77c3eba96a71dca4ba70a75c1d1f88bd8c05714527fb6104f4c05f8,PodSandboxId:f6d395ff52cdfe99a1b7b7032f01762178ef7bb3bfaa594e411489b2af0267a6,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992475581524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-skt2c,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 329dd03f-b15a-4bd8-8a4a-c9863d805c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4bf968db758bf19100be33d8cbc4108d2497298d5a246e3324d80ae9f6879,PodSandboxId:19a196d0fad3970632f49f93ca671e7ec20012bbca428c016cf0f126c20390e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992469637859880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9b
c-zw9rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29a853d-5146-4641-a434-d85147dc3b16,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64287426e1ef5a24cdd83f184bdd2b16d8eecfbbddb429984eaa8b8c1c12b3ee,PodSandboxId:70839deb23b8b60090d28931452787e2f732b1a9fc8e73593d2bd8013cc33dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956
d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992469609567922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8rzrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e346ae-cc28-4f80-9424-c4d97ac8106c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd6634e6f14d98d5b48ac2db81744a8e22c3f304263952c1dfc70bbd50fe05c,PodSandboxId:d1f5ed0ee5436f1bfa22f2e01748c4e5e3d774cc8b63040581e5b4fad3a35ad7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0
,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992468853175031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k85rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da8dc48-3019-4fa6-b5c4-58b0b41aefc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea138f2df1b59ee780d3f0c46d60f9f98a734da772da4bb6abfd6ea0d99722a,PodSandboxId:f024442c6b393068d598f89882b0afa7a3986e975dc1092a7c832936bfdde69b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Imag
e:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992468624239689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fa7b229-cd7d-4aa4-9cee-26a1c5714b3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c30afb93f8e3eb5e6e6fd67676dad9fa14ec82dad71bb480c03110d2d5f6e0,PodSandboxId:a308e093c3f51bc66786f55f433b569af9039c486df7de361f47341ef3c8e44c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd4
4cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992455253696805,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54d60609ac672b809551b4a150ba47f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16370673f2ee1b11649b864c69d1de8c47173694c6d8ea92bc0dbe910779ccb8,PodSandboxId:97feb3a29ddb489aec54941ebc3cc8468c57dc291e345d6941677de7d191c706,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f59
0b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992455215880294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0296e2e67880a24538c37b98004a9e02,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56db29fac457fe5349a74381b318052fcde7c0fcf023c5bd6e46a10781bfd088,PodSandboxId:3f4584deecaa1c4780d6ee2e02956b78f4a117b40535c9969d9d9335e1e99d31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c
5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992455211894273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d680bc60d5cca33ec9f8efe12b9436,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc849e4b3b43a9fbc33bd0fd3b94e30d93cf7c08b80fb850ddae862b55c6dcb0,PodSandboxId:85164c03763f5b6fdce8435fa0ad8d24c9054004c3c83c5c93d0509d823b9f42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf9
59e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992455200211816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7502284b645e9731b2637454c4d938bd,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0d1c3baca46d1381efaf76d0bceef8026a0bb3ada0122eb3da35eb7a3fbfd9,PodSandboxId:adb18c56418037be4ea02c2d9679ee229c490eb2329b39d7ca3dc635ee9bb518,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992165639545259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54d60609ac672b809551b4a150ba47f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d7d551e-6013-48a3-8498-a514b47ad073 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.548706428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30612a8b-e96f-478b-ba00-31469539d19b name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.548799231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30612a8b-e96f-478b-ba00-31469539d19b name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.549724368Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccee7948-63a7-44d6-83af-ce4be9735333 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.550162962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993757550140061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccee7948-63a7-44d6-83af-ce4be9735333 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.550774784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09fc20dc-7f16-436c-a3bd-98010913e943 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.550846235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09fc20dc-7f16-436c-a3bd-98010913e943 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.551080860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47ff28a231090f63655d3122ded0bbdad0f09de4110c17ceae9b6fe691b209cd,PodSandboxId:ddbb436bdb987c129e605487093ec61f759e5f35321ae9c047ed3a2b4da21cb6,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993755519553958,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-gzr4w,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0c584a6a-9023-474f-9e0f-0ba4d8090c46,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0460e13ef77c3eba96a71dca4ba70a75c1d1f88bd8c05714527fb6104f4c05f8,PodSandboxId:f6d395ff52cdfe99a1b7b7032f01762178ef7bb3bfaa594e411489b2af0267a6,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992475581524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-skt2c,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 329dd03f-b15a-4bd8-8a4a-c9863d805c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4bf968db758bf19100be33d8cbc4108d2497298d5a246e3324d80ae9f6879,PodSandboxId:19a196d0fad3970632f49f93ca671e7ec20012bbca428c016cf0f126c20390e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992469637859880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9b
c-zw9rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29a853d-5146-4641-a434-d85147dc3b16,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64287426e1ef5a24cdd83f184bdd2b16d8eecfbbddb429984eaa8b8c1c12b3ee,PodSandboxId:70839deb23b8b60090d28931452787e2f732b1a9fc8e73593d2bd8013cc33dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956
d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992469609567922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8rzrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e346ae-cc28-4f80-9424-c4d97ac8106c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd6634e6f14d98d5b48ac2db81744a8e22c3f304263952c1dfc70bbd50fe05c,PodSandboxId:d1f5ed0ee5436f1bfa22f2e01748c4e5e3d774cc8b63040581e5b4fad3a35ad7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0
,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992468853175031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k85rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da8dc48-3019-4fa6-b5c4-58b0b41aefc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea138f2df1b59ee780d3f0c46d60f9f98a734da772da4bb6abfd6ea0d99722a,PodSandboxId:f024442c6b393068d598f89882b0afa7a3986e975dc1092a7c832936bfdde69b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Imag
e:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992468624239689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fa7b229-cd7d-4aa4-9cee-26a1c5714b3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c30afb93f8e3eb5e6e6fd67676dad9fa14ec82dad71bb480c03110d2d5f6e0,PodSandboxId:a308e093c3f51bc66786f55f433b569af9039c486df7de361f47341ef3c8e44c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd4
4cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992455253696805,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54d60609ac672b809551b4a150ba47f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16370673f2ee1b11649b864c69d1de8c47173694c6d8ea92bc0dbe910779ccb8,PodSandboxId:97feb3a29ddb489aec54941ebc3cc8468c57dc291e345d6941677de7d191c706,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f59
0b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992455215880294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0296e2e67880a24538c37b98004a9e02,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56db29fac457fe5349a74381b318052fcde7c0fcf023c5bd6e46a10781bfd088,PodSandboxId:3f4584deecaa1c4780d6ee2e02956b78f4a117b40535c9969d9d9335e1e99d31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c
5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992455211894273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d680bc60d5cca33ec9f8efe12b9436,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc849e4b3b43a9fbc33bd0fd3b94e30d93cf7c08b80fb850ddae862b55c6dcb0,PodSandboxId:85164c03763f5b6fdce8435fa0ad8d24c9054004c3c83c5c93d0509d823b9f42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf9
59e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992455200211816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7502284b645e9731b2637454c4d938bd,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0d1c3baca46d1381efaf76d0bceef8026a0bb3ada0122eb3da35eb7a3fbfd9,PodSandboxId:adb18c56418037be4ea02c2d9679ee229c490eb2329b39d7ca3dc635ee9bb518,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992165639545259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54d60609ac672b809551b4a150ba47f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09fc20dc-7f16-436c-a3bd-98010913e943 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.590293706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05c8108b-4e3d-4dfb-beeb-0f7030734209 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.590363679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05c8108b-4e3d-4dfb-beeb-0f7030734209 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.591837168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=983cd705-0a5f-459f-8c52-b0bad1649a67 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.592240954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993757592220369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=983cd705-0a5f-459f-8c52-b0bad1649a67 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.593000012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92f1b915-fe90-4c51-8a42-7ae83811a3b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.593071275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92f1b915-fe90-4c51-8a42-7ae83811a3b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.593291509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47ff28a231090f63655d3122ded0bbdad0f09de4110c17ceae9b6fe691b209cd,PodSandboxId:ddbb436bdb987c129e605487093ec61f759e5f35321ae9c047ed3a2b4da21cb6,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993755519553958,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-gzr4w,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0c584a6a-9023-474f-9e0f-0ba4d8090c46,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0460e13ef77c3eba96a71dca4ba70a75c1d1f88bd8c05714527fb6104f4c05f8,PodSandboxId:f6d395ff52cdfe99a1b7b7032f01762178ef7bb3bfaa594e411489b2af0267a6,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992475581524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-skt2c,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 329dd03f-b15a-4bd8-8a4a-c9863d805c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4bf968db758bf19100be33d8cbc4108d2497298d5a246e3324d80ae9f6879,PodSandboxId:19a196d0fad3970632f49f93ca671e7ec20012bbca428c016cf0f126c20390e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992469637859880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9b
c-zw9rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29a853d-5146-4641-a434-d85147dc3b16,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64287426e1ef5a24cdd83f184bdd2b16d8eecfbbddb429984eaa8b8c1c12b3ee,PodSandboxId:70839deb23b8b60090d28931452787e2f732b1a9fc8e73593d2bd8013cc33dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956
d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992469609567922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8rzrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e346ae-cc28-4f80-9424-c4d97ac8106c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd6634e6f14d98d5b48ac2db81744a8e22c3f304263952c1dfc70bbd50fe05c,PodSandboxId:d1f5ed0ee5436f1bfa22f2e01748c4e5e3d774cc8b63040581e5b4fad3a35ad7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0
,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992468853175031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k85rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da8dc48-3019-4fa6-b5c4-58b0b41aefc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea138f2df1b59ee780d3f0c46d60f9f98a734da772da4bb6abfd6ea0d99722a,PodSandboxId:f024442c6b393068d598f89882b0afa7a3986e975dc1092a7c832936bfdde69b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Imag
e:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992468624239689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fa7b229-cd7d-4aa4-9cee-26a1c5714b3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c30afb93f8e3eb5e6e6fd67676dad9fa14ec82dad71bb480c03110d2d5f6e0,PodSandboxId:a308e093c3f51bc66786f55f433b569af9039c486df7de361f47341ef3c8e44c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd4
4cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992455253696805,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54d60609ac672b809551b4a150ba47f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16370673f2ee1b11649b864c69d1de8c47173694c6d8ea92bc0dbe910779ccb8,PodSandboxId:97feb3a29ddb489aec54941ebc3cc8468c57dc291e345d6941677de7d191c706,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f59
0b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992455215880294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0296e2e67880a24538c37b98004a9e02,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56db29fac457fe5349a74381b318052fcde7c0fcf023c5bd6e46a10781bfd088,PodSandboxId:3f4584deecaa1c4780d6ee2e02956b78f4a117b40535c9969d9d9335e1e99d31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c
5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992455211894273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d680bc60d5cca33ec9f8efe12b9436,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc849e4b3b43a9fbc33bd0fd3b94e30d93cf7c08b80fb850ddae862b55c6dcb0,PodSandboxId:85164c03763f5b6fdce8435fa0ad8d24c9054004c3c83c5c93d0509d823b9f42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf9
59e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992455200211816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7502284b645e9731b2637454c4d938bd,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0d1c3baca46d1381efaf76d0bceef8026a0bb3ada0122eb3da35eb7a3fbfd9,PodSandboxId:adb18c56418037be4ea02c2d9679ee229c490eb2329b39d7ca3dc635ee9bb518,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992165639545259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54d60609ac672b809551b4a150ba47f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92f1b915-fe90-4c51-8a42-7ae83811a3b7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.630843327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab527e32-9694-40ae-b7ba-13e1db1e292d name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.630960763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab527e32-9694-40ae-b7ba-13e1db1e292d name=/runtime.v1.RuntimeService/Version
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.632824536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e21d445d-8253-4eba-a95f-04b1d1253bdb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.633265217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993757633244880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e21d445d-8253-4eba-a95f-04b1d1253bdb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.634073482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bebf4a89-a1fb-4879-a4a7-076451d1b6a6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.634130386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bebf4a89-a1fb-4879-a4a7-076451d1b6a6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:02:37 default-k8s-diff-port-912913 crio[723]: time="2025-01-27 16:02:37.634371991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47ff28a231090f63655d3122ded0bbdad0f09de4110c17ceae9b6fe691b209cd,PodSandboxId:ddbb436bdb987c129e605487093ec61f759e5f35321ae9c047ed3a2b4da21cb6,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737993755519553958,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-gzr4w,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0c584a6a-9023-474f-9e0f-0ba4d8090c46,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0460e13ef77c3eba96a71dca4ba70a75c1d1f88bd8c05714527fb6104f4c05f8,PodSandboxId:f6d395ff52cdfe99a1b7b7032f01762178ef7bb3bfaa594e411489b2af0267a6,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737992475581524547,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-skt2c,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: 329dd03f-b15a-4bd8-8a4a-c9863d805c3e,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb4bf968db758bf19100be33d8cbc4108d2497298d5a246e3324d80ae9f6879,PodSandboxId:19a196d0fad3970632f49f93ca671e7ec20012bbca428c016cf0f126c20390e1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992469637859880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9b
c-zw9rm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c29a853d-5146-4641-a434-d85147dc3b16,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64287426e1ef5a24cdd83f184bdd2b16d8eecfbbddb429984eaa8b8c1c12b3ee,PodSandboxId:70839deb23b8b60090d28931452787e2f732b1a9fc8e73593d2bd8013cc33dbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956
d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737992469609567922,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-8rzrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e346ae-cc28-4f80-9424-c4d97ac8106c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fd6634e6f14d98d5b48ac2db81744a8e22c3f304263952c1dfc70bbd50fe05c,PodSandboxId:d1f5ed0ee5436f1bfa22f2e01748c4e5e3d774cc8b63040581e5b4fad3a35ad7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0
,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737992468853175031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k85rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da8dc48-3019-4fa6-b5c4-58b0b41aefc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea138f2df1b59ee780d3f0c46d60f9f98a734da772da4bb6abfd6ea0d99722a,PodSandboxId:f024442c6b393068d598f89882b0afa7a3986e975dc1092a7c832936bfdde69b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Imag
e:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737992468624239689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fa7b229-cd7d-4aa4-9cee-26a1c5714b3c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c30afb93f8e3eb5e6e6fd67676dad9fa14ec82dad71bb480c03110d2d5f6e0,PodSandboxId:a308e093c3f51bc66786f55f433b569af9039c486df7de361f47341ef3c8e44c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd4
4cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737992455253696805,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54d60609ac672b809551b4a150ba47f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16370673f2ee1b11649b864c69d1de8c47173694c6d8ea92bc0dbe910779ccb8,PodSandboxId:97feb3a29ddb489aec54941ebc3cc8468c57dc291e345d6941677de7d191c706,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f59
0b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737992455215880294,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0296e2e67880a24538c37b98004a9e02,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56db29fac457fe5349a74381b318052fcde7c0fcf023c5bd6e46a10781bfd088,PodSandboxId:3f4584deecaa1c4780d6ee2e02956b78f4a117b40535c9969d9d9335e1e99d31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c
5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737992455211894273,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d680bc60d5cca33ec9f8efe12b9436,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc849e4b3b43a9fbc33bd0fd3b94e30d93cf7c08b80fb850ddae862b55c6dcb0,PodSandboxId:85164c03763f5b6fdce8435fa0ad8d24c9054004c3c83c5c93d0509d823b9f42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf9
59e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737992455200211816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7502284b645e9731b2637454c4d938bd,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd0d1c3baca46d1381efaf76d0bceef8026a0bb3ada0122eb3da35eb7a3fbfd9,PodSandboxId:adb18c56418037be4ea02c2d9679ee229c490eb2329b39d7ca3dc635ee9bb518,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c0
4427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737992165639545259,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-912913,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54d60609ac672b809551b4a150ba47f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bebf4a89-a1fb-4879-a4a7-076451d1b6a6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	47ff28a231090       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 seconds ago       Exited              dashboard-metrics-scraper   9                   ddbb436bdb987       dashboard-metrics-scraper-86c6bf9756-gzr4w
	0460e13ef77c3       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   f6d395ff52cdf       kubernetes-dashboard-7779f9b69b-skt2c
	0bb4bf968db75       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   19a196d0fad39       coredns-668d6bf9bc-zw9rm
	64287426e1ef5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   70839deb23b8b       coredns-668d6bf9bc-8rzrt
	6fd6634e6f14d       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   d1f5ed0ee5436       kube-proxy-k85rn
	4ea138f2df1b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   f024442c6b393       storage-provisioner
	11c30afb93f8e       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   a308e093c3f51       kube-apiserver-default-k8s-diff-port-912913
	16370673f2ee1       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   97feb3a29ddb4       kube-scheduler-default-k8s-diff-port-912913
	56db29fac457f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   3f4584deecaa1       etcd-default-k8s-diff-port-912913
	fc849e4b3b43a       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   85164c03763f5       kube-controller-manager-default-k8s-diff-port-912913
	dd0d1c3baca46       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   adb18c5641803       kube-apiserver-default-k8s-diff-port-912913
	
	
	==> coredns [0bb4bf968db758bf19100be33d8cbc4108d2497298d5a246e3324d80ae9f6879] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [64287426e1ef5a24cdd83f184bdd2b16d8eecfbbddb429984eaa8b8c1c12b3ee] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-912913
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-912913
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=default-k8s-diff-port-912913
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T15_41_02_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 15:40:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-912913
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 16:02:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 15:58:41 +0000   Mon, 27 Jan 2025 15:40:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 15:58:41 +0000   Mon, 27 Jan 2025 15:40:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 15:58:41 +0000   Mon, 27 Jan 2025 15:40:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 15:58:41 +0000   Mon, 27 Jan 2025 15:40:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    default-k8s-diff-port-912913
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bca035a2eb87406d818dbacdef6ca03b
	  System UUID:                bca035a2-eb87-406d-818d-bacdef6ca03b
	  Boot ID:                    1b67ea20-dd5e-4475-a0d3-6d895b949023
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-8rzrt                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-zw9rm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-912913                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-912913             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-912913    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-k85rn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-912913             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-rtx6b                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-gzr4w              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-skt2c                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-912913 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-912913 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-912913 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-912913 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-912913 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-912913 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-912913 event: Registered Node default-k8s-diff-port-912913 in Controller
	
	
	==> dmesg <==
	[  +0.052432] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.347620] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.938773] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.634032] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.893951] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.064969] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053075] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.174114] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.139512] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.275961] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[Jan27 15:36] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.063815] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.312044] systemd-fstab-generator[930]: Ignoring "noauto" option for root device
	[  +4.632517] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.021562] kauditd_printk_skb: 90 callbacks suppressed
	[Jan27 15:40] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.393530] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +4.582070] kauditd_printk_skb: 58 callbacks suppressed
	[Jan27 15:41] systemd-fstab-generator[3015]: Ignoring "noauto" option for root device
	[  +4.892158] systemd-fstab-generator[3125]: Ignoring "noauto" option for root device
	[  +0.152861] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.043924] kauditd_printk_skb: 106 callbacks suppressed
	[ +24.339856] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [56db29fac457fe5349a74381b318052fcde7c0fcf023c5bd6e46a10781bfd088] <==
	{"level":"info","ts":"2025-01-27T15:40:59.778246Z","caller":"traceutil/trace.go:171","msg":"trace[940192280] transaction","detail":"{read_only:false; response_revision:141; number_of_response:1; }","duration":"116.935218ms","start":"2025-01-27T15:40:59.661288Z","end":"2025-01-27T15:40:59.778223Z","steps":["trace[940192280] 'process raft request'  (duration: 90.87954ms)","trace[940192280] 'compare'  (duration: 25.929468ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T15:41:14.529917Z","caller":"traceutil/trace.go:171","msg":"trace[1607442483] linearizableReadLoop","detail":"{readStateIndex:533; appliedIndex:532; }","duration":"230.023092ms","start":"2025-01-27T15:41:14.299873Z","end":"2025-01-27T15:41:14.529896Z","steps":["trace[1607442483] 'read index received'  (duration: 229.874588ms)","trace[1607442483] 'applied index is now lower than readState.Index'  (duration: 147.763µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T15:41:14.530379Z","caller":"traceutil/trace.go:171","msg":"trace[665214005] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"355.900495ms","start":"2025-01-27T15:41:14.174429Z","end":"2025-01-27T15:41:14.530329Z","steps":["trace[665214005] 'process raft request'  (duration: 355.365885ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T15:41:14.530538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.606081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-912913\" limit:1 ","response":"range_response_count:1 size:5834"}
	{"level":"info","ts":"2025-01-27T15:41:14.530660Z","caller":"traceutil/trace.go:171","msg":"trace[2003903860] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-912913; range_end:; response_count:1; response_revision:520; }","duration":"230.708778ms","start":"2025-01-27T15:41:14.299849Z","end":"2025-01-27T15:41:14.530557Z","steps":["trace[2003903860] 'agreement among raft nodes before linearized reading'  (duration: 230.580975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T15:41:14.530996Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T15:41:14.174347Z","time spent":"356.101924ms","remote":"127.0.0.1:59306","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5819,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-912913\" mod_revision:299 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-912913\" value_size:5751 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-912913\" > >"}
	{"level":"info","ts":"2025-01-27T15:41:14.697379Z","caller":"traceutil/trace.go:171","msg":"trace[214917308] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"142.612857ms","start":"2025-01-27T15:41:14.554750Z","end":"2025-01-27T15:41:14.697363Z","steps":["trace[214917308] 'process raft request'  (duration: 136.996555ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T15:41:19.066099Z","caller":"traceutil/trace.go:171","msg":"trace[364536375] linearizableReadLoop","detail":"{readStateIndex:549; appliedIndex:548; }","duration":"262.717235ms","start":"2025-01-27T15:41:18.803353Z","end":"2025-01-27T15:41:19.066070Z","steps":["trace[364536375] 'read index received'  (duration: 262.521917ms)","trace[364536375] 'applied index is now lower than readState.Index'  (duration: 192.462µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T15:41:19.066653Z","caller":"traceutil/trace.go:171","msg":"trace[210916] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"270.66007ms","start":"2025-01-27T15:41:18.795977Z","end":"2025-01-27T15:41:19.066637Z","steps":["trace[210916] 'process raft request'  (duration: 269.948819ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T15:41:19.066902Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.49609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T15:41:19.066935Z","caller":"traceutil/trace.go:171","msg":"trace[1483635123] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:535; }","duration":"263.604485ms","start":"2025-01-27T15:41:18.803322Z","end":"2025-01-27T15:41:19.066926Z","steps":["trace[1483635123] 'agreement among raft nodes before linearized reading'  (duration: 263.499915ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T15:41:19.067083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.172175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T15:41:19.067102Z","caller":"traceutil/trace.go:171","msg":"trace[1376878097] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:535; }","duration":"166.226352ms","start":"2025-01-27T15:41:18.900869Z","end":"2025-01-27T15:41:19.067096Z","steps":["trace[1376878097] 'agreement among raft nodes before linearized reading'  (duration: 166.181906ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T15:50:56.353833Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":865}
	{"level":"info","ts":"2025-01-27T15:50:56.394945Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":865,"took":"40.018197ms","hash":3158444853,"current-db-size-bytes":2945024,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2945024,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-27T15:50:56.395042Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3158444853,"revision":865,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T15:55:56.362509Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1118}
	{"level":"info","ts":"2025-01-27T15:55:56.366744Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1118,"took":"3.763221ms","hash":3302858548,"current-db-size-bytes":2945024,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1761280,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T15:55:56.366821Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3302858548,"revision":1118,"compact-revision":865}
	{"level":"info","ts":"2025-01-27T16:00:56.371358Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1369}
	{"level":"info","ts":"2025-01-27T16:00:56.376666Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1369,"took":"4.759412ms","hash":1070017693,"current-db-size-bytes":2945024,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1773568,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T16:00:56.376746Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1070017693,"revision":1369,"compact-revision":1118}
	{"level":"info","ts":"2025-01-27T16:01:50.067393Z","caller":"traceutil/trace.go:171","msg":"trace[1902761506] transaction","detail":"{read_only:false; response_revision:1665; number_of_response:1; }","duration":"231.400953ms","start":"2025-01-27T16:01:49.835916Z","end":"2025-01-27T16:01:50.067316Z","steps":["trace[1902761506] 'process raft request'  (duration: 231.250318ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T16:01:50.888218Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.348582ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10888741450166946170 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.160\" mod_revision:1658 > success:<request_put:<key:\"/registry/masterleases/192.168.39.160\" value_size:67 lease:1665369413312170359 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.160\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T16:01:50.888726Z","caller":"traceutil/trace.go:171","msg":"trace[888293344] transaction","detail":"{read_only:false; response_revision:1666; number_of_response:1; }","duration":"253.525926ms","start":"2025-01-27T16:01:50.635183Z","end":"2025-01-27T16:01:50.888709Z","steps":["trace[888293344] 'process raft request'  (duration: 124.227927ms)","trace[888293344] 'compare'  (duration: 128.148745ms)"],"step_count":2}
	
	
	==> kernel <==
	 16:02:38 up 26 min,  0 users,  load average: 0.16, 0.22, 0.19
	Linux default-k8s-diff-port-912913 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [11c30afb93f8e3eb5e6e6fd67676dad9fa14ec82dad71bb480c03110d2d5f6e0] <==
	I0127 15:58:59.148077       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 15:58:59.148117       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 16:00:58.145874       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:00:58.146334       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 16:00:59.148842       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:00:59.148946       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 16:00:59.148863       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:00:59.149019       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0127 16:00:59.150124       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 16:00:59.150123       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 16:01:59.150797       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:01:59.151049       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 16:01:59.150880       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 16:01:59.151228       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 16:01:59.152392       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 16:01:59.152460       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [dd0d1c3baca46d1381efaf76d0bceef8026a0bb3ada0122eb3da35eb7a3fbfd9] <==
	W0127 15:40:45.922988       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:45.983318       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:46.017294       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:46.090915       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:46.176331       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:46.190026       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:46.190116       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:46.240309       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:46.332020       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:46.810097       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:49.776341       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.442378       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.460236       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.477043       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.487046       1 logging.go:55] [core] [Channel #8 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.536725       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.594978       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.681957       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.719825       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:50.883513       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:51.014525       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:51.090497       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:51.108441       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:51.277534       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 15:40:51.281217       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [fc849e4b3b43a9fbc33bd0fd3b94e30d93cf7c08b80fb850ddae862b55c6dcb0] <==
	I0127 15:57:35.877208       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:58:05.824683       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:58:05.885821       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:58:35.831349       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:58:35.893986       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 15:58:41.898446       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-912913"
	E0127 15:59:05.839017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:59:05.903972       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 15:59:35.844997       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 15:59:35.912824       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:00:05.853956       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:00:05.921756       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:00:35.860719       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:00:35.929400       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:01:05.868171       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:01:05.939071       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:01:35.876172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:01:35.949040       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 16:02:05.884756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:02:05.959646       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 16:02:22.522265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="413.902µs"
	I0127 16:02:34.516774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="140.06µs"
	E0127 16:02:35.891446       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 16:02:35.967555       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 16:02:36.137301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="82.939µs"
	
	
	==> kube-proxy [6fd6634e6f14d98d5b48ac2db81744a8e22c3f304263952c1dfc70bbd50fe05c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 15:41:09.680786       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 15:41:09.795987       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.160"]
	E0127 15:41:09.796127       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 15:41:10.016562       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 15:41:10.016680       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 15:41:10.016704       1 server_linux.go:170] "Using iptables Proxier"
	I0127 15:41:10.019880       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 15:41:10.020408       1 server.go:497] "Version info" version="v1.32.1"
	I0127 15:41:10.020454       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 15:41:10.022850       1 config.go:199] "Starting service config controller"
	I0127 15:41:10.022991       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 15:41:10.023143       1 config.go:105] "Starting endpoint slice config controller"
	I0127 15:41:10.023205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 15:41:10.024460       1 config.go:329] "Starting node config controller"
	I0127 15:41:10.024492       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 15:41:10.123365       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 15:41:10.123424       1 shared_informer.go:320] Caches are synced for service config
	I0127 15:41:10.124960       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [16370673f2ee1b11649b864c69d1de8c47173694c6d8ea92bc0dbe910779ccb8] <==
	W0127 15:40:59.271777       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 15:40:59.272009       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 15:40:59.277414       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 15:40:59.277536       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.319273       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 15:40:59.319385       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.339504       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:59.339686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.388807       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 15:40:59.389096       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.426062       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:59.426162       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.448634       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 15:40:59.448669       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.490352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 15:40:59.490513       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.618048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 15:40:59.618225       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.622441       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 15:40:59.622512       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.649308       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:59.649381       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 15:40:59.695325       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0127 15:40:59.695565       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 15:41:00.915263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 16:02:01 default-k8s-diff-port-912913 kubelet[3022]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 16:02:01 default-k8s-diff-port-912913 kubelet[3022]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 16:02:01 default-k8s-diff-port-912913 kubelet[3022]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 16:02:01 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:01.916622    3022 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993721915699234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:01 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:01.916680    3022 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993721915699234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:10 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:10.510836    3022 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 16:02:10 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:10.510939    3022 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 16:02:10 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:10.511158    3022 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qz8bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-rtx6b_kube-system(aed61473-0cc8-4459-9153-5c42e5a10b2d): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 16:02:10 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:10.512661    3022 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rtx6b" podUID="aed61473-0cc8-4459-9153-5c42e5a10b2d"
	Jan 27 16:02:11 default-k8s-diff-port-912913 kubelet[3022]: I0127 16:02:11.498039    3022 scope.go:117] "RemoveContainer" containerID="98535a983203af0033f68ddabdc6f8d0e3af11c9fdcd1211dbea323873957dc1"
	Jan 27 16:02:11 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:11.499884    3022 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gzr4w_kubernetes-dashboard(0c584a6a-9023-474f-9e0f-0ba4d8090c46)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gzr4w" podUID="0c584a6a-9023-474f-9e0f-0ba4d8090c46"
	Jan 27 16:02:11 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:11.919888    3022 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993731919169503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:11 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:11.920316    3022 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993731919169503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:21 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:21.923624    3022 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993741922875411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:21 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:21.923691    3022 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993741922875411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:22 default-k8s-diff-port-912913 kubelet[3022]: I0127 16:02:22.498908    3022 scope.go:117] "RemoveContainer" containerID="98535a983203af0033f68ddabdc6f8d0e3af11c9fdcd1211dbea323873957dc1"
	Jan 27 16:02:22 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:22.499229    3022 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gzr4w_kubernetes-dashboard(0c584a6a-9023-474f-9e0f-0ba4d8090c46)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gzr4w" podUID="0c584a6a-9023-474f-9e0f-0ba4d8090c46"
	Jan 27 16:02:22 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:22.501131    3022 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rtx6b" podUID="aed61473-0cc8-4459-9153-5c42e5a10b2d"
	Jan 27 16:02:31 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:31.925812    3022 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993751925316558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:31 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:31.926218    3022 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993751925316558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 16:02:34 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:34.498845    3022 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-rtx6b" podUID="aed61473-0cc8-4459-9153-5c42e5a10b2d"
	Jan 27 16:02:35 default-k8s-diff-port-912913 kubelet[3022]: I0127 16:02:35.497050    3022 scope.go:117] "RemoveContainer" containerID="98535a983203af0033f68ddabdc6f8d0e3af11c9fdcd1211dbea323873957dc1"
	Jan 27 16:02:36 default-k8s-diff-port-912913 kubelet[3022]: I0127 16:02:36.111923    3022 scope.go:117] "RemoveContainer" containerID="98535a983203af0033f68ddabdc6f8d0e3af11c9fdcd1211dbea323873957dc1"
	Jan 27 16:02:36 default-k8s-diff-port-912913 kubelet[3022]: I0127 16:02:36.112231    3022 scope.go:117] "RemoveContainer" containerID="47ff28a231090f63655d3122ded0bbdad0f09de4110c17ceae9b6fe691b209cd"
	Jan 27 16:02:36 default-k8s-diff-port-912913 kubelet[3022]: E0127 16:02:36.112358    3022 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-gzr4w_kubernetes-dashboard(0c584a6a-9023-474f-9e0f-0ba4d8090c46)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-gzr4w" podUID="0c584a6a-9023-474f-9e0f-0ba4d8090c46"
	
	
	==> kubernetes-dashboard [0460e13ef77c3eba96a71dca4ba70a75c1d1f88bd8c05714527fb6104f4c05f8] <==
	2025/01/27 15:50:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:50:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:51:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:51:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:52:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:52:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:53:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:53:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:54:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:54:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:55:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:55:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:56:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:56:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:57:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:57:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:58:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:58:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:59:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 15:59:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:00:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:00:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:01:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:01:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 16:02:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4ea138f2df1b59ee780d3f0c46d60f9f98a734da772da4bb6abfd6ea0d99722a] <==
	I0127 15:41:08.872371       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 15:41:08.890894       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 15:41:08.890961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 15:41:08.913291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 15:41:08.913482       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-912913_5b274ad7-1cf4-4b4d-9217-45909ff9b2dd!
	I0127 15:41:08.914511       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46f6f05c-dc92-44a0-9bdf-b2b3af218a1d", APIVersion:"v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-912913_5b274ad7-1cf4-4b4d-9217-45909ff9b2dd became leader
	I0127 15:41:09.015812       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-912913_5b274ad7-1cf4-4b4d-9217-45909ff9b2dd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-912913 -n default-k8s-diff-port-912913
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-912913 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-rtx6b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-912913 describe pod metrics-server-f79f97bbb-rtx6b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-912913 describe pod metrics-server-f79f97bbb-rtx6b: exit status 1 (66.49197ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-rtx6b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-912913 describe pod metrics-server-f79f97bbb-rtx6b: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1626.61s)
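The kubelet log above shows the metrics-server pod stuck in ImagePullBackOff because its image is pinned to the unreachable registry fake.domain (set by the test via --registries=MetricsServer=fake.domain). A minimal sketch of how the image reference could be confirmed from the deployment, using the same context as the post-mortem above (the deployment name metrics-server in kube-system is what the test itself queries; treat it as an assumption outside this run):

	# Show which image the metrics-server deployment is trying to pull
	kubectl --context default-k8s-diff-port-912913 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'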

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-405706 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-405706 create -f testdata/busybox.yaml: exit status 1 (51.42283ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-405706" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-405706 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 6 (260.389173ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 15:36:01.817473 1075426 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-405706" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-405706" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 6 (242.528931ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 15:36:02.061433 1075472 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-405706" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-405706" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
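Both status probes above fail because the kubeconfig at /home/jenkins/minikube-integration/20321-1005652/kubeconfig no longer has an endpoint for the "old-k8s-version-405706" profile, leaving kubectl pointed at a stale context. A minimal sketch of the recovery step the warning itself recommends (profile name taken from the log; assumes a minikube binary on PATH rather than the out/minikube-linux-amd64 build used here):

	# Re-sync the kubeconfig entry for this profile, then confirm kubectl can reach it
	minikube update-context -p old-k8s-version-405706
	kubectl config current-context
	kubectl --context old-k8s-version-405706 get nodes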

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-405706 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0127 15:36:08.725906 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:08.732360 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:08.743862 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:08.765394 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:08.806856 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:08.888375 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:09.050557 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:09.372222 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:10.014261 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:11.296405 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:13.858273 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:18.980544 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:19.905397 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:29.222701 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:38.923574 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:46.261426 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:46.267858 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:46.279329 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:46.300788 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:46.342293 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:46.423819 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:46.585310 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:46.907085 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:47.548979 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:48.830867 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:49.704179 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:51.392919 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:36:56.514772 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:00.867424 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:06.756804 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:07.512637 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:07.519072 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:07.530356 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:07.551814 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:07.593338 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:07.674874 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:07.836431 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:08.158196 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:08.800432 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:10.082120 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:12.644248 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:16.327953 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:17.766102 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:27.239385 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:28.008266 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:30.666067 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:45.219921 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:37:48.490345 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-405706 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m50.541059049s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-405706 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-405706 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-405706 describe deploy/metrics-server -n kube-system: exit status 1 (46.627881ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-405706" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-405706 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 6 (235.312125ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 15:37:52.885684 1075916 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-405706" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-405706" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (110.82s)
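The addon enable fails at the apply step because the API server is not reachable ("The connection to the server localhost:8443 was refused"), so the metrics-server manifests are never applied. A minimal sketch of how this would typically be checked and retried outside the test harness, with the same custom image and registry the test passes (again assuming a minikube binary on PATH; the flags are copied from the command in the log above):

	# Confirm the control plane is actually serving before re-enabling the addon
	minikube status -p old-k8s-version-405706
	minikube addons enable metrics-server -p old-k8s-version-405706 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain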

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (511.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-405706 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0127 15:38:00.845408 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:38:08.201607 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:38:12.922006 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:38:16.947301 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:38:22.789175 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:38:29.452492 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:38:52.587698 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:39:06.238467 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:39:30.123879 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:39:32.465508 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:39:51.374409 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:40:00.169736 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:40:16.985574 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:40:38.927876 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:40:44.687597 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:41:06.630958 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:41:08.726288 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:41:36.429144 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:41:46.261025 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:42:07.513056 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:42:13.965680 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:42:35.217270 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:42:45.220750 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:43:16.947367 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:44:06.238332 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:44:32.465260 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:44:40.017434 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:45:16.985643 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:45:38.928167 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:46:08.726815 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-405706 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m28.615498909s)

                                                
                                                
-- stdout --
	* [old-k8s-version-405706] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-405706" primary control-plane node in "old-k8s-version-405706" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-405706" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:37:58.460225 1076050 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:37:58.460642 1076050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:37:58.460654 1076050 out.go:358] Setting ErrFile to fd 2...
	I0127 15:37:58.460661 1076050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:37:58.461077 1076050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:37:58.462086 1076050 out.go:352] Setting JSON to false
	I0127 15:37:58.463486 1076050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22825,"bootTime":1737969453,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:37:58.463630 1076050 start.go:139] virtualization: kvm guest
	I0127 15:37:58.465774 1076050 out.go:177] * [old-k8s-version-405706] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:37:58.467019 1076050 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:37:58.467027 1076050 notify.go:220] Checking for updates...
	I0127 15:37:58.469366 1076050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:37:58.470862 1076050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:37:58.472239 1076050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:37:58.473602 1076050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:37:58.474992 1076050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:37:58.477098 1076050 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:37:58.477731 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.477799 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.494965 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0127 15:37:58.495385 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.495879 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.495901 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.496287 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.496581 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.498539 1076050 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 15:37:58.499766 1076050 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:37:58.500092 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.500132 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.516530 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0127 15:37:58.517083 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.517634 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.517666 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.518105 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.518356 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.558744 1076050 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:37:58.560294 1076050 start.go:297] selected driver: kvm2
	I0127 15:37:58.560309 1076050 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-4
05706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:37:58.560451 1076050 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:37:58.561175 1076050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:37:58.561284 1076050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:37:58.579056 1076050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:37:58.579656 1076050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:37:58.579710 1076050 cni.go:84] Creating CNI manager for ""
	I0127 15:37:58.579776 1076050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:37:58.579842 1076050 start.go:340] cluster config:
	{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:37:58.580020 1076050 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:37:58.581716 1076050 out.go:177] * Starting "old-k8s-version-405706" primary control-plane node in "old-k8s-version-405706" cluster
	I0127 15:37:58.582897 1076050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:37:58.582967 1076050 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 15:37:58.582980 1076050 cache.go:56] Caching tarball of preloaded images
	I0127 15:37:58.583091 1076050 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:37:58.583107 1076050 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 15:37:58.583235 1076050 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:37:58.583561 1076050 start.go:360] acquireMachinesLock for old-k8s-version-405706: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:37:58.583628 1076050 start.go:364] duration metric: took 38.743µs to acquireMachinesLock for "old-k8s-version-405706"
	I0127 15:37:58.583652 1076050 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:37:58.583664 1076050 fix.go:54] fixHost starting: 
	I0127 15:37:58.584041 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.584088 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.599995 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0127 15:37:58.600476 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.600955 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.600978 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.601364 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.601600 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.601761 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetState
	I0127 15:37:58.603539 1076050 fix.go:112] recreateIfNeeded on old-k8s-version-405706: state=Stopped err=<nil>
	I0127 15:37:58.603586 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	W0127 15:37:58.603763 1076050 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:37:58.606243 1076050 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-405706" ...
	I0127 15:37:58.607570 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .Start
	I0127 15:37:58.607751 1076050 main.go:141] libmachine: (old-k8s-version-405706) starting domain...
	I0127 15:37:58.607775 1076050 main.go:141] libmachine: (old-k8s-version-405706) ensuring networks are active...
	I0127 15:37:58.608545 1076050 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network default is active
	I0127 15:37:58.608940 1076050 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network mk-old-k8s-version-405706 is active
	I0127 15:37:58.609360 1076050 main.go:141] libmachine: (old-k8s-version-405706) getting domain XML...
	I0127 15:37:58.610094 1076050 main.go:141] libmachine: (old-k8s-version-405706) creating domain...
	I0127 15:37:59.916140 1076050 main.go:141] libmachine: (old-k8s-version-405706) waiting for IP...
	I0127 15:37:59.917074 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:37:59.917644 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:37:59.917771 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:37:59.917639 1076085 retry.go:31] will retry after 260.191068ms: waiting for domain to come up
	I0127 15:38:00.180221 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.180922 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.180948 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.180879 1076085 retry.go:31] will retry after 359.566395ms: waiting for domain to come up
	I0127 15:38:00.542429 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.543056 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.543097 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.542942 1076085 retry.go:31] will retry after 454.555688ms: waiting for domain to come up
	I0127 15:38:00.999387 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.999926 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.999963 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.999888 1076085 retry.go:31] will retry after 559.246215ms: waiting for domain to come up
	I0127 15:38:01.560836 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:01.561528 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:01.561554 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:01.561489 1076085 retry.go:31] will retry after 552.626147ms: waiting for domain to come up
	I0127 15:38:02.116418 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:02.116873 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:02.116914 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:02.116852 1076085 retry.go:31] will retry after 808.293412ms: waiting for domain to come up
	I0127 15:38:02.927177 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:02.927742 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:02.927794 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:02.927707 1076085 retry.go:31] will retry after 740.958034ms: waiting for domain to come up
	I0127 15:38:03.670221 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:03.670746 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:03.670778 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:03.670698 1076085 retry.go:31] will retry after 1.365040284s: waiting for domain to come up
	I0127 15:38:05.038371 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:05.039049 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:05.039084 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:05.039001 1076085 retry.go:31] will retry after 1.410803026s: waiting for domain to come up
	I0127 15:38:06.451661 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:06.452329 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:06.452353 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:06.452303 1076085 retry.go:31] will retry after 1.899894945s: waiting for domain to come up
	I0127 15:38:08.354209 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:08.354816 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:08.354843 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:08.354774 1076085 retry.go:31] will retry after 2.020609979s: waiting for domain to come up
	I0127 15:38:10.377713 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:10.378246 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:10.378288 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:10.378203 1076085 retry.go:31] will retry after 2.469378968s: waiting for domain to come up
	I0127 15:38:12.850116 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:12.850624 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:12.850678 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:12.850598 1076085 retry.go:31] will retry after 4.322374162s: waiting for domain to come up
	I0127 15:38:17.175528 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.176129 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has current primary IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.176161 1076050 main.go:141] libmachine: (old-k8s-version-405706) found domain IP: 192.168.72.49
	I0127 15:38:17.176174 1076050 main.go:141] libmachine: (old-k8s-version-405706) reserving static IP address...
	I0127 15:38:17.176643 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.176678 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | skip adding static IP to network mk-old-k8s-version-405706 - found existing host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"}
	I0127 15:38:17.176696 1076050 main.go:141] libmachine: (old-k8s-version-405706) reserved static IP address 192.168.72.49 for domain old-k8s-version-405706
	I0127 15:38:17.176711 1076050 main.go:141] libmachine: (old-k8s-version-405706) waiting for SSH...
	I0127 15:38:17.176725 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Getting to WaitForSSH function...
	I0127 15:38:17.179302 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.179688 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.179730 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.179875 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH client type: external
	I0127 15:38:17.179902 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa (-rw-------)
	I0127 15:38:17.179949 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:38:17.179964 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | About to run SSH command:
	I0127 15:38:17.179977 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | exit 0
	I0127 15:38:17.309257 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | SSH cmd err, output: <nil>: 
	I0127 15:38:17.309663 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetConfigRaw
	I0127 15:38:17.310369 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:17.313129 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.313573 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.313604 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.313898 1076050 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:38:17.314149 1076050 machine.go:93] provisionDockerMachine start ...
	I0127 15:38:17.314178 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:17.314424 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.317176 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.317563 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.317591 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.317822 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.318108 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.318299 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.318460 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.318635 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.318853 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.318864 1076050 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:38:17.433866 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 15:38:17.433903 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.434143 1076050 buildroot.go:166] provisioning hostname "old-k8s-version-405706"
	I0127 15:38:17.434203 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.434415 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.437023 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.437426 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.437473 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.437592 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.437754 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.437908 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.438061 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.438217 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.438406 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.438418 1076050 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-405706 && echo "old-k8s-version-405706" | sudo tee /etc/hostname
	I0127 15:38:17.569398 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-405706
	
	I0127 15:38:17.569429 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.572466 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.572839 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.572882 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.573066 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.573312 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.573557 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.573726 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.573924 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.574106 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.574123 1076050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-405706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-405706/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-405706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:38:17.705253 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
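	For readability, the hostname check that was just run over SSH (the multi-line command above) is restated here with comments; this is the same logic from this run, not an additional command:
		# If no line in /etc/hosts already ends in the node name...
		if ! grep -xq '.*\sold-k8s-version-405706' /etc/hosts; then
			# ...rewrite an existing 127.0.1.1 entry if one is present,
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-405706/g' /etc/hosts
			else
				# ...otherwise append a fresh 127.0.1.1 mapping.
				echo '127.0.1.1 old-k8s-version-405706' | sudo tee -a /etc/hosts
			fi
		fi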
	I0127 15:38:17.705300 1076050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:38:17.705320 1076050 buildroot.go:174] setting up certificates
	I0127 15:38:17.705333 1076050 provision.go:84] configureAuth start
	I0127 15:38:17.705346 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.705683 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:17.708834 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.709332 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.709361 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.709583 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.712195 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.712714 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.712755 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.712924 1076050 provision.go:143] copyHostCerts
	I0127 15:38:17.712990 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:38:17.713017 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:38:17.713095 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:38:17.713241 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:38:17.713259 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:38:17.713326 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:38:17.713446 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:38:17.713460 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:38:17.713500 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:38:17.713572 1076050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-405706 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-405706]
	I0127 15:38:17.976673 1076050 provision.go:177] copyRemoteCerts
	I0127 15:38:17.976750 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:38:17.976777 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.979513 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.979876 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.979909 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.980065 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.980267 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.980415 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.980554 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.068921 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:38:18.098428 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 15:38:18.126079 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 15:38:18.152193 1076050 provision.go:87] duration metric: took 446.842204ms to configureAuth
	I0127 15:38:18.152233 1076050 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:38:18.152508 1076050 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:38:18.152613 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.155796 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.156222 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.156254 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.156368 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.156577 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.156774 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.156938 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.157163 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:18.157375 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:18.157392 1076050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:38:18.414989 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:38:18.415023 1076050 machine.go:96] duration metric: took 1.100855468s to provisionDockerMachine
	I0127 15:38:18.415039 1076050 start.go:293] postStartSetup for "old-k8s-version-405706" (driver="kvm2")
	I0127 15:38:18.415054 1076050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:38:18.415078 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.415462 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:38:18.415499 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.418353 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.418778 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.418818 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.418925 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.419129 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.419322 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.419440 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.508389 1076050 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:38:18.513026 1076050 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:38:18.513065 1076050 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:38:18.513137 1076050 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:38:18.513210 1076050 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:38:18.513309 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:38:18.523553 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:38:18.550472 1076050 start.go:296] duration metric: took 135.415525ms for postStartSetup
	I0127 15:38:18.550553 1076050 fix.go:56] duration metric: took 19.966860382s for fixHost
	I0127 15:38:18.550584 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.553490 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.553896 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.553956 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.554089 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.554297 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.554458 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.554585 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.554806 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:18.555042 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:18.555058 1076050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:38:18.670326 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737992298.641469796
	
	I0127 15:38:18.670351 1076050 fix.go:216] guest clock: 1737992298.641469796
	I0127 15:38:18.670358 1076050 fix.go:229] Guest: 2025-01-27 15:38:18.641469796 +0000 UTC Remote: 2025-01-27 15:38:18.550560739 +0000 UTC m=+20.130793423 (delta=90.909057ms)
	I0127 15:38:18.670379 1076050 fix.go:200] guest clock delta is within tolerance: 90.909057ms
	I0127 15:38:18.670384 1076050 start.go:83] releasing machines lock for "old-k8s-version-405706", held for 20.08674208s
	I0127 15:38:18.670400 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.670689 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:18.673557 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.673931 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.673967 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.674112 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674583 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674751 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674869 1076050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:38:18.674916 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.674944 1076050 ssh_runner.go:195] Run: cat /version.json
	I0127 15:38:18.674975 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.677875 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678255 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678395 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.678427 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678595 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.678749 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.678783 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678819 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.679001 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.679093 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.679181 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.679243 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.681217 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.681729 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.787808 1076050 ssh_runner.go:195] Run: systemctl --version
	I0127 15:38:18.794834 1076050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:38:18.943494 1076050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:38:18.950152 1076050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:38:18.950269 1076050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:38:18.967110 1076050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:38:18.967141 1076050 start.go:495] detecting cgroup driver to use...
	I0127 15:38:18.967215 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:38:18.985631 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:38:19.002007 1076050 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:38:19.002098 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:38:19.015975 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:38:19.030630 1076050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:38:19.167900 1076050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:38:19.339595 1076050 docker.go:233] disabling docker service ...
	I0127 15:38:19.339680 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:38:19.355894 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:38:19.370010 1076050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:38:19.503289 1076050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:38:19.640006 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:38:19.656134 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:38:19.676136 1076050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 15:38:19.676207 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.688127 1076050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:38:19.688235 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.700866 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.712387 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.724833 1076050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:38:19.736825 1076050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:38:19.747906 1076050 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:38:19.747976 1076050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:38:19.761744 1076050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 15:38:19.771558 1076050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:38:19.891616 1076050 ssh_runner.go:195] Run: sudo systemctl restart crio
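	Taken together, the sed edits at 15:38:19.676-15:38:19.712 leave the CRI-O drop-in with roughly the following keys (a sketch of the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf after this run; any other keys in that file are not shown in the log):
		# set by the pause-image sed
		pause_image = "registry.k8s.io/pause:3.2"
		# set by the cgroup-driver sed; conmon_cgroup is deleted and re-added just below cgroup_manager
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"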
	I0127 15:38:19.987396 1076050 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:38:19.987496 1076050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:38:19.993148 1076050 start.go:563] Will wait 60s for crictl version
	I0127 15:38:19.993218 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:19.997232 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:38:20.047289 1076050 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:38:20.047381 1076050 ssh_runner.go:195] Run: crio --version
	I0127 15:38:20.080844 1076050 ssh_runner.go:195] Run: crio --version
	I0127 15:38:20.113498 1076050 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 15:38:20.115011 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:20.118087 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:20.118526 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:20.118554 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:20.118911 1076050 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 15:38:20.123918 1076050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:38:20.137420 1076050 kubeadm.go:883] updating cluster {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:38:20.137608 1076050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:38:20.137679 1076050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:38:20.203088 1076050 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:38:20.203162 1076050 ssh_runner.go:195] Run: which lz4
	I0127 15:38:20.207834 1076050 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:38:20.212511 1076050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:38:20.212550 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 15:38:21.944361 1076050 crio.go:462] duration metric: took 1.736570115s to copy over tarball
	I0127 15:38:21.944459 1076050 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 15:38:25.017812 1076050 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.073312095s)
	I0127 15:38:25.017848 1076050 crio.go:469] duration metric: took 3.07344607s to extract the tarball
	I0127 15:38:25.017859 1076050 ssh_runner.go:146] rm: /preloaded.tar.lz4
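	The preload handling above, from the missing-file check through cleanup, condenses to the following sequence (a commented sketch of the same commands minikube ran over SSH in this run; the path, byte count and timings are the ones reported above):
		# the guest has no preloaded tarball yet: stat exits 1, "No such file or directory"
		stat -c "%s %y" /preloaded.tar.lz4
		# copy the cached preload (473237281 bytes) from the host cache to /preloaded.tar.lz4 via scp,
		# then unpack it into /var and remove the tarball
		sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		rm /preloaded.tar.lz4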
	I0127 15:38:25.068609 1076050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:38:25.107660 1076050 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:38:25.107705 1076050 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 15:38:25.107797 1076050 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.107831 1076050 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.107843 1076050 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 15:38:25.107782 1076050 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.107866 1076050 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.107793 1076050 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.107810 1076050 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.107872 1076050 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.109711 1076050 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.109716 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.109736 1076050 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.109749 1076050 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 15:38:25.109765 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.109711 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.109717 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.109721 1076050 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.319866 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 15:38:25.320854 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.329418 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.331454 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.331999 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.338125 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.346119 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.438398 1076050 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 15:38:25.438508 1076050 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 15:38:25.438596 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.485875 1076050 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 15:38:25.485939 1076050 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.486002 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.524177 1076050 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 15:38:25.524230 1076050 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.524284 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.533972 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.537150 1076050 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 15:38:25.537198 1076050 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.537239 1076050 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 15:38:25.537282 1076050 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.537306 1076050 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 15:38:25.537329 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537256 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537388 1076050 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 15:38:25.537334 1076050 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.537413 1076050 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.537430 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537437 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.537438 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537484 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.537505 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.730245 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.730334 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.730438 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.730438 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.730510 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.730615 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.730667 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.896539 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.896835 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.896864 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.896869 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.896952 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.896990 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.897080 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:26.067159 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 15:38:26.067203 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:26.067293 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:26.078064 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:26.078128 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 15:38:26.078233 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:26.078345 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 15:38:26.172870 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 15:38:26.172975 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 15:38:26.177848 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 15:38:26.177943 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 15:38:26.177981 1076050 cache_images.go:92] duration metric: took 1.070258879s to LoadCachedImages
	W0127 15:38:26.178068 1076050 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0127 15:38:26.178082 1076050 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0127 15:38:26.178211 1076050 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-405706 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:38:26.178294 1076050 ssh_runner.go:195] Run: crio config
	I0127 15:38:26.228357 1076050 cni.go:84] Creating CNI manager for ""
	I0127 15:38:26.228379 1076050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:38:26.228388 1076050 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:38:26.228409 1076050 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-405706 NodeName:old-k8s-version-405706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 15:38:26.228568 1076050 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-405706"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:38:26.228657 1076050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 15:38:26.240731 1076050 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:38:26.240809 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:38:26.251662 1076050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 15:38:26.270153 1076050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:38:26.292045 1076050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 15:38:26.312171 1076050 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0127 15:38:26.316436 1076050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:38:26.330437 1076050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:38:26.453879 1076050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:38:26.473364 1076050 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706 for IP: 192.168.72.49
	I0127 15:38:26.473395 1076050 certs.go:194] generating shared ca certs ...
	I0127 15:38:26.473419 1076050 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:38:26.473672 1076050 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:38:26.473739 1076050 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:38:26.473755 1076050 certs.go:256] generating profile certs ...
	I0127 15:38:26.473909 1076050 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.key
	I0127 15:38:26.473993 1076050 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key.8816e362
	I0127 15:38:26.474047 1076050 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key
	I0127 15:38:26.474215 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:38:26.474262 1076050 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:38:26.474272 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:38:26.474304 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:38:26.474335 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:38:26.474377 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:38:26.474434 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:38:26.475310 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:38:26.528151 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:38:26.569116 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:38:26.612791 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:38:26.643362 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 15:38:26.682611 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:38:26.736411 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:38:26.766171 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 15:38:26.806820 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:38:26.835935 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:38:26.862752 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:38:26.890713 1076050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:38:26.910713 1076050 ssh_runner.go:195] Run: openssl version
	I0127 15:38:26.917762 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:38:26.930093 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.935103 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.935187 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.941655 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:38:26.955281 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:38:26.969095 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.974104 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.974177 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.980428 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:38:26.992636 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:38:27.006632 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.011797 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.011873 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.018384 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 15:38:27.032120 1076050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:38:27.037441 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:38:27.044020 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:38:27.050856 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:38:27.057896 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:38:27.065183 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:38:27.072632 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 15:38:27.079504 1076050 kubeadm.go:392] StartCluster: {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:38:27.079605 1076050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:38:27.079670 1076050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:38:27.122961 1076050 cri.go:89] found id: ""
	I0127 15:38:27.123034 1076050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:38:27.134170 1076050 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 15:38:27.134194 1076050 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 15:38:27.134254 1076050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 15:38:27.146526 1076050 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:38:27.147269 1076050 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-405706" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:38:27.147608 1076050 kubeconfig.go:62] /home/jenkins/minikube-integration/20321-1005652/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-405706" cluster setting kubeconfig missing "old-k8s-version-405706" context setting]
	I0127 15:38:27.148175 1076050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:38:27.218301 1076050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 15:38:27.230797 1076050 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0127 15:38:27.230842 1076050 kubeadm.go:1160] stopping kube-system containers ...
	I0127 15:38:27.230858 1076050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 15:38:27.230918 1076050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:38:27.273845 1076050 cri.go:89] found id: ""
	I0127 15:38:27.273935 1076050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 15:38:27.295864 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:38:27.308596 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:38:27.308616 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:38:27.308663 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:38:27.319955 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:38:27.320015 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:38:27.331528 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:38:27.342177 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:38:27.342248 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:38:27.352666 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:38:27.364010 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:38:27.364077 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:38:27.375886 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:38:27.386069 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:38:27.386141 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:38:27.398977 1076050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:38:27.410085 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:27.579462 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.350228 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.604472 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.715137 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.812566 1076050 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:38:28.812663 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:29.312952 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:29.812784 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:30.313395 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:30.813525 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.313773 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.813137 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:32.313501 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:32.813028 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:33.312894 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:33.813345 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:34.313510 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:34.813678 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:35.313121 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:35.813541 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.312890 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.813411 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:37.313228 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:37.813599 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:38.313526 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:38.812744 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:39.313501 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:39.813568 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:40.313585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:40.813078 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.312734 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.812823 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:42.312829 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:42.813108 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:43.312983 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:43.813614 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:44.313522 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:44.813162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.313000 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.813166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:46.313147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:46.812791 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:47.312810 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:47.812775 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:48.313432 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:48.813154 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:49.312838 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:49.813340 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.312925 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.813287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:51.312785 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:51.813687 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:52.313111 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:52.812802 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:53.313097 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:53.813587 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.313181 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.812993 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:55.313464 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:55.813050 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:56.312920 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:56.813705 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:57.313622 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:57.812842 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:58.313381 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:58.812816 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.312817 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.813035 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:00.313444 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:00.813287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:01.312763 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:01.813721 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:02.313131 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:02.813297 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:03.313697 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:03.813314 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.313147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.813585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:05.313388 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:05.813722 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:06.313190 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:06.812942 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:07.313516 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:07.813321 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:08.313684 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:08.813457 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.312972 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.812986 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:10.313838 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:10.813128 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:11.312866 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:11.812982 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:12.312768 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:12.813426 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:13.313370 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:13.812803 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.313174 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.813162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:15.312724 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:15.813166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:16.313662 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:16.813497 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:17.313422 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:17.813587 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:18.313749 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:18.813301 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:19.313610 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:19.813293 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:20.313667 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:20.813161 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.313709 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.813699 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:22.313185 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:22.813328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:23.313612 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:23.812846 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:24.313129 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:24.813728 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.313735 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.813439 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:26.313406 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:26.813597 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:27.313484 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:27.813672 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:28.313161 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
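The minute of repeated pgrep entries above is the apiserver wait loop: roughly every 500ms the runner re-executes "sudo pgrep -xnf kube-apiserver.*minikube.*" over SSH until a matching process appears or the wait window expires, after which it falls back to collecting diagnostics. A minimal, self-contained Go sketch of that polling pattern (an illustration of what the log shows, not minikube's actual implementation; the function name, 60s timeout, and local exec usage are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess re-runs pgrep until a kube-apiserver process
// matching minikube's pattern shows up, or the deadline passes. This mirrors
// the ~500ms polling cadence visible in the timestamps above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if at least one process matches the pattern.
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	// In this run the process never appears, so a loop like this would time
	// out, just as the log falls through to log gathering below.
	if err := waitForAPIServerProcess(60 * time.Second); err != nil {
		fmt.Println("wait failed:", err)
	}
}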
	I0127 15:39:28.813541 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:28.813633 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:28.855334 1076050 cri.go:89] found id: ""
	I0127 15:39:28.855368 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.855376 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:28.855383 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:28.855466 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:28.892923 1076050 cri.go:89] found id: ""
	I0127 15:39:28.892959 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.892972 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:28.892980 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:28.893081 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:28.942133 1076050 cri.go:89] found id: ""
	I0127 15:39:28.942163 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.942187 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:28.942196 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:28.942261 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:28.980950 1076050 cri.go:89] found id: ""
	I0127 15:39:28.980978 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.980988 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:28.980995 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:28.981080 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:29.022166 1076050 cri.go:89] found id: ""
	I0127 15:39:29.022200 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.022209 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:29.022215 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:29.022269 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:29.060408 1076050 cri.go:89] found id: ""
	I0127 15:39:29.060439 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.060447 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:29.060454 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:29.060521 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:29.100890 1076050 cri.go:89] found id: ""
	I0127 15:39:29.100924 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.100935 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:29.100944 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:29.101075 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:29.139688 1076050 cri.go:89] found id: ""
	I0127 15:39:29.139720 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.139729 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:29.139741 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:29.139752 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:29.181255 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:29.181288 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:29.232218 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:29.232260 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:29.245853 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:29.245881 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:29.382461 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:29.382487 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:29.382501 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:31.957162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:31.971225 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:31.971290 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:32.026501 1076050 cri.go:89] found id: ""
	I0127 15:39:32.026535 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.026546 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:32.026555 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:32.026624 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:32.066192 1076050 cri.go:89] found id: ""
	I0127 15:39:32.066232 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.066244 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:32.066253 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:32.066334 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:32.106017 1076050 cri.go:89] found id: ""
	I0127 15:39:32.106047 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.106056 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:32.106062 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:32.106130 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:32.146534 1076050 cri.go:89] found id: ""
	I0127 15:39:32.146565 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.146575 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:32.146581 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:32.146644 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:32.186982 1076050 cri.go:89] found id: ""
	I0127 15:39:32.187007 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.187016 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:32.187022 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:32.187077 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:32.229657 1076050 cri.go:89] found id: ""
	I0127 15:39:32.229685 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.229693 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:32.229700 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:32.229756 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:32.267228 1076050 cri.go:89] found id: ""
	I0127 15:39:32.267259 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.267268 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:32.267275 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:32.267340 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:32.305366 1076050 cri.go:89] found id: ""
	I0127 15:39:32.305394 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.305402 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:32.305412 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:32.305424 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:32.345293 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:32.345335 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:32.395863 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:32.395922 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:32.411092 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:32.411133 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:32.493214 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:32.493248 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:32.493266 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:35.077133 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:35.094000 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:35.094095 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:35.132448 1076050 cri.go:89] found id: ""
	I0127 15:39:35.132488 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.132500 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:35.132508 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:35.132583 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:35.167599 1076050 cri.go:89] found id: ""
	I0127 15:39:35.167632 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.167644 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:35.167653 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:35.167713 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:35.204383 1076050 cri.go:89] found id: ""
	I0127 15:39:35.204429 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.204438 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:35.204444 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:35.204503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:35.241382 1076050 cri.go:89] found id: ""
	I0127 15:39:35.241411 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.241423 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:35.241431 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:35.241500 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:35.278253 1076050 cri.go:89] found id: ""
	I0127 15:39:35.278280 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.278289 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:35.278296 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:35.278357 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:35.320389 1076050 cri.go:89] found id: ""
	I0127 15:39:35.320418 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.320425 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:35.320432 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:35.320498 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:35.360563 1076050 cri.go:89] found id: ""
	I0127 15:39:35.360592 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.360604 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:35.360613 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:35.360670 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:35.396537 1076050 cri.go:89] found id: ""
	I0127 15:39:35.396580 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.396593 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:35.396609 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:35.396628 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:35.474518 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:35.474554 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:35.474575 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:35.554396 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:35.554445 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:35.599042 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:35.599100 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:35.652578 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:35.652619 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:38.167582 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:38.182164 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:38.182250 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:38.218993 1076050 cri.go:89] found id: ""
	I0127 15:39:38.219025 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.219034 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:38.219040 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:38.219121 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:38.257547 1076050 cri.go:89] found id: ""
	I0127 15:39:38.257575 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.257584 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:38.257590 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:38.257643 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:38.295251 1076050 cri.go:89] found id: ""
	I0127 15:39:38.295287 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.295299 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:38.295307 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:38.295378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:38.339567 1076050 cri.go:89] found id: ""
	I0127 15:39:38.339605 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.339621 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:38.339629 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:38.339697 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:38.375969 1076050 cri.go:89] found id: ""
	I0127 15:39:38.376007 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.376019 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:38.376028 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:38.376097 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:38.429385 1076050 cri.go:89] found id: ""
	I0127 15:39:38.429416 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.429427 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:38.429435 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:38.429503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:38.481564 1076050 cri.go:89] found id: ""
	I0127 15:39:38.481604 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.481618 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:38.481627 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:38.481700 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:38.535177 1076050 cri.go:89] found id: ""
	I0127 15:39:38.535203 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.535211 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:38.535223 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:38.535238 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:38.549306 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:38.549349 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:38.622573 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:38.622607 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:38.622625 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:38.697323 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:38.697363 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:38.738950 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:38.738981 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:41.298384 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:41.312088 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:41.312162 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:41.349779 1076050 cri.go:89] found id: ""
	I0127 15:39:41.349808 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.349817 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:41.349824 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:41.349887 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:41.387675 1076050 cri.go:89] found id: ""
	I0127 15:39:41.387715 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.387732 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:41.387740 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:41.387797 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:41.424135 1076050 cri.go:89] found id: ""
	I0127 15:39:41.424166 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.424175 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:41.424181 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:41.424246 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:41.464733 1076050 cri.go:89] found id: ""
	I0127 15:39:41.464760 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.464768 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:41.464774 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:41.464835 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:41.506669 1076050 cri.go:89] found id: ""
	I0127 15:39:41.506700 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.506713 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:41.506725 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:41.506793 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:41.548804 1076050 cri.go:89] found id: ""
	I0127 15:39:41.548833 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.548842 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:41.548848 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:41.548911 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:41.590203 1076050 cri.go:89] found id: ""
	I0127 15:39:41.590233 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.590245 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:41.590253 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:41.590318 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:41.625407 1076050 cri.go:89] found id: ""
	I0127 15:39:41.625434 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.625442 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:41.625452 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:41.625466 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:41.702765 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:41.702808 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:41.745622 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:41.745662 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:41.799894 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:41.799943 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:41.814151 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:41.814180 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:41.899042 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:44.399328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:44.420663 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:44.420731 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:44.484562 1076050 cri.go:89] found id: ""
	I0127 15:39:44.484595 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.484606 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:44.484616 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:44.484681 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:44.555635 1076050 cri.go:89] found id: ""
	I0127 15:39:44.555663 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.555672 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:44.555678 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:44.555730 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:44.598564 1076050 cri.go:89] found id: ""
	I0127 15:39:44.598592 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.598600 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:44.598606 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:44.598663 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:44.639072 1076050 cri.go:89] found id: ""
	I0127 15:39:44.639115 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.639126 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:44.639134 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:44.639200 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:44.677620 1076050 cri.go:89] found id: ""
	I0127 15:39:44.677652 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.677662 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:44.677670 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:44.677730 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:44.714227 1076050 cri.go:89] found id: ""
	I0127 15:39:44.714263 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.714273 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:44.714281 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:44.714357 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:44.753864 1076050 cri.go:89] found id: ""
	I0127 15:39:44.753898 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.753911 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:44.753919 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:44.753987 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:44.790576 1076050 cri.go:89] found id: ""
	I0127 15:39:44.790603 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.790613 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:44.790625 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:44.790641 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:44.864427 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:44.864468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:44.904955 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:44.904989 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:44.959074 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:44.959137 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:44.976053 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:44.976082 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:45.062578 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:47.562901 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:47.576665 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:47.576751 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:47.615806 1076050 cri.go:89] found id: ""
	I0127 15:39:47.615842 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.615855 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:47.615864 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:47.615936 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:47.651913 1076050 cri.go:89] found id: ""
	I0127 15:39:47.651947 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.651966 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:47.651974 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:47.652045 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:47.688572 1076050 cri.go:89] found id: ""
	I0127 15:39:47.688604 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.688614 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:47.688620 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:47.688680 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:47.726688 1076050 cri.go:89] found id: ""
	I0127 15:39:47.726725 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.726737 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:47.726745 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:47.726815 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:47.768385 1076050 cri.go:89] found id: ""
	I0127 15:39:47.768413 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.768424 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:47.768433 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:47.768493 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:47.806575 1076050 cri.go:89] found id: ""
	I0127 15:39:47.806601 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.806609 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:47.806615 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:47.806668 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:47.843234 1076050 cri.go:89] found id: ""
	I0127 15:39:47.843259 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.843267 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:47.843273 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:47.843325 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:47.882360 1076050 cri.go:89] found id: ""
	I0127 15:39:47.882398 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.882411 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:47.882426 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:47.882445 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:47.936678 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:47.936721 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:47.951861 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:47.951889 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:48.027451 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:48.027479 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:48.027497 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:48.110314 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:48.110362 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:50.653993 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:50.668077 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:50.668150 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:50.708132 1076050 cri.go:89] found id: ""
	I0127 15:39:50.708160 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.708168 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:50.708175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:50.708244 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:50.748371 1076050 cri.go:89] found id: ""
	I0127 15:39:50.748400 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.748409 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:50.748415 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:50.748471 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:50.785148 1076050 cri.go:89] found id: ""
	I0127 15:39:50.785183 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.785194 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:50.785202 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:50.785267 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:50.820827 1076050 cri.go:89] found id: ""
	I0127 15:39:50.820864 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.820874 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:50.820881 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:50.820948 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:50.859566 1076050 cri.go:89] found id: ""
	I0127 15:39:50.859602 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.859615 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:50.859623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:50.859699 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:50.896227 1076050 cri.go:89] found id: ""
	I0127 15:39:50.896263 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.896276 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:50.896285 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:50.896352 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:50.933357 1076050 cri.go:89] found id: ""
	I0127 15:39:50.933393 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.933405 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:50.933414 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:50.933478 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:50.968264 1076050 cri.go:89] found id: ""
	I0127 15:39:50.968303 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.968313 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:50.968324 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:50.968338 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:51.026708 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:51.026754 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:51.041436 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:51.041475 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:51.110945 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:51.110967 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:51.110980 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:51.192815 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:51.192858 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:53.737031 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:53.751175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:53.751266 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:53.793720 1076050 cri.go:89] found id: ""
	I0127 15:39:53.793748 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.793757 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:53.793764 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:53.793822 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:53.832993 1076050 cri.go:89] found id: ""
	I0127 15:39:53.833065 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.833074 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:53.833080 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:53.833139 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:53.872089 1076050 cri.go:89] found id: ""
	I0127 15:39:53.872122 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.872133 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:53.872147 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:53.872205 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:53.914262 1076050 cri.go:89] found id: ""
	I0127 15:39:53.914298 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.914311 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:53.914321 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:53.914400 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:53.954035 1076050 cri.go:89] found id: ""
	I0127 15:39:53.954073 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.954085 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:53.954093 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:53.954158 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:53.994248 1076050 cri.go:89] found id: ""
	I0127 15:39:53.994306 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.994320 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:53.994329 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:53.994407 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:54.031811 1076050 cri.go:89] found id: ""
	I0127 15:39:54.031836 1076050 logs.go:282] 0 containers: []
	W0127 15:39:54.031847 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:54.031855 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:54.031917 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:54.070159 1076050 cri.go:89] found id: ""
	I0127 15:39:54.070199 1076050 logs.go:282] 0 containers: []
	W0127 15:39:54.070212 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:54.070225 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:54.070242 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:54.112540 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:54.112575 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:54.163657 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:54.163710 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:54.178720 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:54.178757 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:54.255558 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:54.255596 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:54.255613 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:56.834676 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:56.848186 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:56.848265 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:56.885958 1076050 cri.go:89] found id: ""
	I0127 15:39:56.885984 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.885993 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:56.885999 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:56.886050 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:56.925195 1076050 cri.go:89] found id: ""
	I0127 15:39:56.925233 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.925247 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:56.925256 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:56.925328 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:56.967597 1076050 cri.go:89] found id: ""
	I0127 15:39:56.967631 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.967644 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:56.967654 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:56.967719 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:57.005973 1076050 cri.go:89] found id: ""
	I0127 15:39:57.006008 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.006021 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:57.006029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:57.006104 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:57.042547 1076050 cri.go:89] found id: ""
	I0127 15:39:57.042581 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.042593 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:57.042601 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:57.042664 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:57.084492 1076050 cri.go:89] found id: ""
	I0127 15:39:57.084517 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.084525 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:57.084531 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:57.084581 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:57.120954 1076050 cri.go:89] found id: ""
	I0127 15:39:57.120988 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.121032 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:57.121039 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:57.121100 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:57.159620 1076050 cri.go:89] found id: ""
	I0127 15:39:57.159657 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.159668 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:57.159681 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:57.159696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:57.203209 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:57.203245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:57.253929 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:57.253972 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:57.268430 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:57.268463 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:57.338716 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:57.338741 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:57.338760 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:59.918299 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:59.933577 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:59.933650 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:59.971396 1076050 cri.go:89] found id: ""
	I0127 15:39:59.971437 1076050 logs.go:282] 0 containers: []
	W0127 15:39:59.971449 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:59.971457 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:59.971516 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:00.012852 1076050 cri.go:89] found id: ""
	I0127 15:40:00.012890 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.012902 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:00.012910 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:00.012983 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:00.053636 1076050 cri.go:89] found id: ""
	I0127 15:40:00.053673 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.053685 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:00.053693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:00.053757 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:00.091584 1076050 cri.go:89] found id: ""
	I0127 15:40:00.091615 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.091626 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:00.091634 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:00.091698 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:00.126906 1076050 cri.go:89] found id: ""
	I0127 15:40:00.126936 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.126945 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:00.126957 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:00.127012 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:00.164308 1076050 cri.go:89] found id: ""
	I0127 15:40:00.164345 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.164354 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:00.164360 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:00.164412 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:00.200695 1076050 cri.go:89] found id: ""
	I0127 15:40:00.200727 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.200739 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:00.200750 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:00.200807 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:00.239910 1076050 cri.go:89] found id: ""
	I0127 15:40:00.239938 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.239947 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:00.239958 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:00.239970 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:00.255441 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:00.255468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:00.333737 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:00.333767 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:00.333782 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:00.417199 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:00.417256 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:00.461683 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:00.461711 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:03.016318 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:03.033626 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:03.033707 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:03.070895 1076050 cri.go:89] found id: ""
	I0127 15:40:03.070929 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.070940 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:03.070948 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:03.071011 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:03.107691 1076050 cri.go:89] found id: ""
	I0127 15:40:03.107725 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.107736 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:03.107742 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:03.107806 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:03.144989 1076050 cri.go:89] found id: ""
	I0127 15:40:03.145032 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.145044 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:03.145052 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:03.145106 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:03.182441 1076050 cri.go:89] found id: ""
	I0127 15:40:03.182473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.182482 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:03.182488 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:03.182540 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:03.220251 1076050 cri.go:89] found id: ""
	I0127 15:40:03.220286 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.220298 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:03.220306 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:03.220366 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:03.258761 1076050 cri.go:89] found id: ""
	I0127 15:40:03.258799 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.258810 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:03.258818 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:03.258888 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:03.307236 1076050 cri.go:89] found id: ""
	I0127 15:40:03.307274 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.307283 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:03.307289 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:03.307352 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:03.354451 1076050 cri.go:89] found id: ""
	I0127 15:40:03.354487 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.354498 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:03.354509 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:03.354524 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:03.405369 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:03.405412 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:03.420837 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:03.420866 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:03.496384 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:03.496420 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:03.496435 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:03.576992 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:03.577066 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:06.128185 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:06.142266 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:06.142381 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:06.181053 1076050 cri.go:89] found id: ""
	I0127 15:40:06.181087 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.181097 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:06.181106 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:06.181162 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:06.218206 1076050 cri.go:89] found id: ""
	I0127 15:40:06.218236 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.218245 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:06.218251 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:06.218304 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:06.255094 1076050 cri.go:89] found id: ""
	I0127 15:40:06.255138 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.255158 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:06.255165 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:06.255221 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:06.295564 1076050 cri.go:89] found id: ""
	I0127 15:40:06.295598 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.295611 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:06.295620 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:06.295683 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:06.332518 1076050 cri.go:89] found id: ""
	I0127 15:40:06.332552 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.332561 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:06.332568 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:06.332641 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:06.371503 1076050 cri.go:89] found id: ""
	I0127 15:40:06.371532 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.371540 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:06.371547 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:06.371599 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:06.409091 1076050 cri.go:89] found id: ""
	I0127 15:40:06.409119 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.409128 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:06.409135 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:06.409192 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:06.445033 1076050 cri.go:89] found id: ""
	I0127 15:40:06.445078 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.445092 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:06.445113 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:06.445132 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:06.460284 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:06.460321 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:06.543807 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:06.543831 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:06.543844 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:06.626884 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:06.626929 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:06.670309 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:06.670350 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:09.219752 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:09.234460 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:09.234537 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:09.271526 1076050 cri.go:89] found id: ""
	I0127 15:40:09.271574 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.271584 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:09.271590 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:09.271661 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:09.312643 1076050 cri.go:89] found id: ""
	I0127 15:40:09.312681 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.312696 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:09.312705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:09.312771 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:09.351697 1076050 cri.go:89] found id: ""
	I0127 15:40:09.351736 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.351749 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:09.351757 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:09.351825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:09.390289 1076050 cri.go:89] found id: ""
	I0127 15:40:09.390315 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.390324 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:09.390332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:09.390400 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:09.431515 1076050 cri.go:89] found id: ""
	I0127 15:40:09.431548 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.431559 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:09.431567 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:09.431634 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:09.473134 1076050 cri.go:89] found id: ""
	I0127 15:40:09.473170 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.473182 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:09.473190 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:09.473261 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:09.516505 1076050 cri.go:89] found id: ""
	I0127 15:40:09.516542 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.516556 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:09.516564 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:09.516634 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:09.560596 1076050 cri.go:89] found id: ""
	I0127 15:40:09.560638 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.560649 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:09.560662 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:09.560678 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:09.616174 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:09.616219 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:09.631586 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:09.631622 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:09.706642 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:09.706677 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:09.706696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:09.780834 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:09.780883 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:12.323632 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:12.337043 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:12.337121 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:12.371851 1076050 cri.go:89] found id: ""
	I0127 15:40:12.371875 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.371884 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:12.371891 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:12.371963 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:12.409962 1076050 cri.go:89] found id: ""
	I0127 15:40:12.409997 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.410010 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:12.410018 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:12.410095 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:12.445440 1076050 cri.go:89] found id: ""
	I0127 15:40:12.445473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.445482 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:12.445489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:12.445544 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:12.481239 1076050 cri.go:89] found id: ""
	I0127 15:40:12.481270 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.481282 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:12.481303 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:12.481372 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:12.520832 1076050 cri.go:89] found id: ""
	I0127 15:40:12.520859 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.520867 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:12.520873 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:12.520923 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:12.559781 1076050 cri.go:89] found id: ""
	I0127 15:40:12.559818 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.559829 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:12.559838 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:12.559901 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:12.597821 1076050 cri.go:89] found id: ""
	I0127 15:40:12.597861 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.597873 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:12.597882 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:12.597944 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:12.635939 1076050 cri.go:89] found id: ""
	I0127 15:40:12.635974 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.635986 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:12.635998 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:12.636013 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:12.709126 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:12.709150 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:12.709163 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:12.792573 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:12.792617 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:12.832327 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:12.832368 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:12.884984 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:12.885039 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:15.401225 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:15.415906 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:15.415993 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:15.457989 1076050 cri.go:89] found id: ""
	I0127 15:40:15.458021 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.458031 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:15.458038 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:15.458100 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:15.493789 1076050 cri.go:89] found id: ""
	I0127 15:40:15.493836 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.493852 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:15.493860 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:15.493927 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:15.535193 1076050 cri.go:89] found id: ""
	I0127 15:40:15.535219 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.535227 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:15.535233 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:15.535298 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:15.574983 1076050 cri.go:89] found id: ""
	I0127 15:40:15.575016 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.575030 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:15.575036 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:15.575107 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:15.613038 1076050 cri.go:89] found id: ""
	I0127 15:40:15.613072 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.613083 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:15.613091 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:15.613166 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:15.651439 1076050 cri.go:89] found id: ""
	I0127 15:40:15.651473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.651483 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:15.651489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:15.651559 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:15.697895 1076050 cri.go:89] found id: ""
	I0127 15:40:15.697933 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.697945 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:15.697953 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:15.698026 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:15.736368 1076050 cri.go:89] found id: ""
	I0127 15:40:15.736397 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.736405 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:15.736416 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:15.736431 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:15.788954 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:15.789002 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:15.803162 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:15.803193 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:15.878504 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:15.878538 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:15.878557 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:15.955134 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:15.955186 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:18.497724 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:18.519382 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:18.519463 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:18.556458 1076050 cri.go:89] found id: ""
	I0127 15:40:18.556495 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.556504 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:18.556511 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:18.556566 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:18.593672 1076050 cri.go:89] found id: ""
	I0127 15:40:18.593700 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.593717 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:18.593726 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:18.593794 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:18.632353 1076050 cri.go:89] found id: ""
	I0127 15:40:18.632393 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.632404 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:18.632412 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:18.632467 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:18.668613 1076050 cri.go:89] found id: ""
	I0127 15:40:18.668647 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.668659 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:18.668668 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:18.668738 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:18.706751 1076050 cri.go:89] found id: ""
	I0127 15:40:18.706786 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.706798 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:18.706806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:18.706872 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:18.745670 1076050 cri.go:89] found id: ""
	I0127 15:40:18.745706 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.745719 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:18.745728 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:18.745798 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:18.783666 1076050 cri.go:89] found id: ""
	I0127 15:40:18.783696 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.783708 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:18.783716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:18.783783 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:18.821591 1076050 cri.go:89] found id: ""
	I0127 15:40:18.821626 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.821637 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:18.821652 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:18.821669 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:18.895554 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:18.895582 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:18.895600 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:18.977366 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:18.977416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:19.020341 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:19.020374 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:19.073493 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:19.073537 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:21.589182 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:21.607125 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:21.607245 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:21.654887 1076050 cri.go:89] found id: ""
	I0127 15:40:21.654922 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.654933 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:21.654942 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:21.655013 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:21.703233 1076050 cri.go:89] found id: ""
	I0127 15:40:21.703279 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.703289 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:21.703298 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:21.703440 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:21.744227 1076050 cri.go:89] found id: ""
	I0127 15:40:21.744260 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.744273 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:21.744286 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:21.744356 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:21.786397 1076050 cri.go:89] found id: ""
	I0127 15:40:21.786430 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.786445 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:21.786454 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:21.786517 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:21.831934 1076050 cri.go:89] found id: ""
	I0127 15:40:21.831963 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.831974 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:21.831980 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:21.832036 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:21.877230 1076050 cri.go:89] found id: ""
	I0127 15:40:21.877264 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.877275 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:21.877283 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:21.877351 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:21.923993 1076050 cri.go:89] found id: ""
	I0127 15:40:21.924026 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.924038 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:21.924047 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:21.924109 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:21.963890 1076050 cri.go:89] found id: ""
	I0127 15:40:21.963922 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.963931 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:21.963942 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:21.963958 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:22.010706 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:22.010743 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:22.070053 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:22.070096 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:22.085574 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:22.085604 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:22.163198 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:22.163228 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:22.163245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:24.747046 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:24.761103 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:24.761194 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:24.806570 1076050 cri.go:89] found id: ""
	I0127 15:40:24.806659 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.806679 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:24.806689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:24.806755 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:24.854651 1076050 cri.go:89] found id: ""
	I0127 15:40:24.854684 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.854697 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:24.854705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:24.854773 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:24.915668 1076050 cri.go:89] found id: ""
	I0127 15:40:24.915705 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.915718 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:24.915728 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:24.915794 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:24.975570 1076050 cri.go:89] found id: ""
	I0127 15:40:24.975610 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.975623 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:24.975632 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:24.975704 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:25.025853 1076050 cri.go:89] found id: ""
	I0127 15:40:25.025885 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.025896 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:25.025903 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:25.025980 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:25.064940 1076050 cri.go:89] found id: ""
	I0127 15:40:25.064976 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.064987 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:25.064996 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:25.065082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:25.110507 1076050 cri.go:89] found id: ""
	I0127 15:40:25.110539 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.110549 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:25.110558 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:25.110622 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:25.150241 1076050 cri.go:89] found id: ""
	I0127 15:40:25.150288 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.150299 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:25.150313 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:25.150330 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:25.243205 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:25.243238 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:25.243255 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:25.323856 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:25.323900 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:25.367207 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:25.367245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:25.429072 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:25.429120 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:27.945904 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:27.959618 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:27.959708 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:27.999655 1076050 cri.go:89] found id: ""
	I0127 15:40:27.999685 1076050 logs.go:282] 0 containers: []
	W0127 15:40:27.999697 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:27.999705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:27.999768 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:28.039662 1076050 cri.go:89] found id: ""
	I0127 15:40:28.039695 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.039708 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:28.039716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:28.039786 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:28.075418 1076050 cri.go:89] found id: ""
	I0127 15:40:28.075451 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.075462 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:28.075472 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:28.075542 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:28.114964 1076050 cri.go:89] found id: ""
	I0127 15:40:28.115023 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.115036 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:28.115045 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:28.115106 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:28.153086 1076050 cri.go:89] found id: ""
	I0127 15:40:28.153115 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.153126 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:28.153135 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:28.153198 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:28.189564 1076050 cri.go:89] found id: ""
	I0127 15:40:28.189597 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.189607 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:28.189623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:28.189680 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:28.228037 1076050 cri.go:89] found id: ""
	I0127 15:40:28.228067 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.228076 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:28.228083 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:28.228163 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:28.277124 1076050 cri.go:89] found id: ""
	I0127 15:40:28.277155 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.277168 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:28.277179 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:28.277192 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:28.340183 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:28.340231 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:28.356822 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:28.356854 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:28.428923 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:28.428951 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:28.428968 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:28.505128 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:28.505170 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:31.047029 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:31.060582 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:31.060685 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:31.097127 1076050 cri.go:89] found id: ""
	I0127 15:40:31.097150 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.097160 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:31.097168 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:31.097230 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:31.134764 1076050 cri.go:89] found id: ""
	I0127 15:40:31.134799 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.134810 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:31.134818 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:31.134900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:31.174779 1076050 cri.go:89] found id: ""
	I0127 15:40:31.174807 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.174816 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:31.174822 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:31.174875 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:31.215471 1076050 cri.go:89] found id: ""
	I0127 15:40:31.215503 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.215513 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:31.215519 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:31.215572 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:31.253765 1076050 cri.go:89] found id: ""
	I0127 15:40:31.253796 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.253804 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:31.253811 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:31.253867 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:31.297130 1076050 cri.go:89] found id: ""
	I0127 15:40:31.297161 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.297170 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:31.297176 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:31.297240 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:31.335280 1076050 cri.go:89] found id: ""
	I0127 15:40:31.335315 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.335326 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:31.335334 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:31.335406 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:31.372619 1076050 cri.go:89] found id: ""
	I0127 15:40:31.372652 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.372664 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:31.372678 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:31.372693 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:31.427666 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:31.427709 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:31.442810 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:31.442842 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:31.511297 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:31.511330 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:31.511354 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:31.595122 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:31.595168 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:34.138287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:34.156651 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:34.156734 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:34.194604 1076050 cri.go:89] found id: ""
	I0127 15:40:34.194647 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.194658 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:34.194666 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:34.194729 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:34.233299 1076050 cri.go:89] found id: ""
	I0127 15:40:34.233353 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.233363 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:34.233369 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:34.233423 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:34.274424 1076050 cri.go:89] found id: ""
	I0127 15:40:34.274453 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.274465 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:34.274473 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:34.274539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:34.317113 1076050 cri.go:89] found id: ""
	I0127 15:40:34.317144 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.317155 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:34.317168 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:34.317239 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:34.359212 1076050 cri.go:89] found id: ""
	I0127 15:40:34.359242 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.359252 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:34.359261 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:34.359328 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:34.398773 1076050 cri.go:89] found id: ""
	I0127 15:40:34.398805 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.398824 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:34.398833 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:34.398910 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:34.440053 1076050 cri.go:89] found id: ""
	I0127 15:40:34.440087 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.440099 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:34.440107 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:34.440178 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:34.482908 1076050 cri.go:89] found id: ""
	I0127 15:40:34.482943 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.482959 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:34.482973 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:34.482992 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:34.500178 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:34.500206 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:34.580251 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:34.580279 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:34.580302 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:34.673730 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:34.673772 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:34.720797 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:34.720838 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:37.282487 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:37.300162 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:37.300231 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:37.348753 1076050 cri.go:89] found id: ""
	I0127 15:40:37.348786 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.348798 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:37.348806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:37.348870 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:37.398630 1076050 cri.go:89] found id: ""
	I0127 15:40:37.398669 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.398681 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:37.398689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:37.398761 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:37.437030 1076050 cri.go:89] found id: ""
	I0127 15:40:37.437127 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.437155 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:37.437188 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:37.437277 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:37.477745 1076050 cri.go:89] found id: ""
	I0127 15:40:37.477837 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.477855 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:37.477864 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:37.477937 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:37.514259 1076050 cri.go:89] found id: ""
	I0127 15:40:37.514292 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.514302 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:37.514311 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:37.514385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:37.551313 1076050 cri.go:89] found id: ""
	I0127 15:40:37.551349 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.551359 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:37.551367 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:37.551427 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:37.593740 1076050 cri.go:89] found id: ""
	I0127 15:40:37.593772 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.593783 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:37.593791 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:37.593854 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:37.634133 1076050 cri.go:89] found id: ""
	I0127 15:40:37.634169 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.634181 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:37.634194 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:37.634217 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:37.699046 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:37.699092 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:37.717470 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:37.717512 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:37.791051 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:37.791077 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:37.791106 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:37.882694 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:37.882742 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:40.431585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:40.449664 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:40.449766 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:40.500904 1076050 cri.go:89] found id: ""
	I0127 15:40:40.500995 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.501020 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:40.501029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:40.501103 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:40.543907 1076050 cri.go:89] found id: ""
	I0127 15:40:40.543939 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.543950 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:40.543958 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:40.544018 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:40.592294 1076050 cri.go:89] found id: ""
	I0127 15:40:40.592328 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.592339 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:40.592352 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:40.592418 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:40.641396 1076050 cri.go:89] found id: ""
	I0127 15:40:40.641429 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.641439 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:40.641449 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:40.641522 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:40.687151 1076050 cri.go:89] found id: ""
	I0127 15:40:40.687185 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.687197 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:40.687206 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:40.687279 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:40.728537 1076050 cri.go:89] found id: ""
	I0127 15:40:40.728573 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.728584 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:40.728593 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:40.728666 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:40.770995 1076050 cri.go:89] found id: ""
	I0127 15:40:40.771022 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.771035 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:40.771042 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:40.771108 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:40.818299 1076050 cri.go:89] found id: ""
	I0127 15:40:40.818332 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.818344 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:40.818357 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:40.818379 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:40.835538 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:40.835566 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:40.912785 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:40.912812 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:40.912829 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:41.029124 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:41.029177 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:41.088618 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:41.088649 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:43.646818 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:43.660154 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:43.660237 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:43.698517 1076050 cri.go:89] found id: ""
	I0127 15:40:43.698548 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.698557 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:43.698563 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:43.698624 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:43.736919 1076050 cri.go:89] found id: ""
	I0127 15:40:43.736954 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.736967 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:43.736978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:43.737064 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:43.777333 1076050 cri.go:89] found id: ""
	I0127 15:40:43.777369 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.777382 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:43.777391 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:43.777462 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:43.817427 1076050 cri.go:89] found id: ""
	I0127 15:40:43.817460 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.817471 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:43.817480 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:43.817546 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:43.866498 1076050 cri.go:89] found id: ""
	I0127 15:40:43.866527 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.866538 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:43.866546 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:43.866616 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:43.919477 1076050 cri.go:89] found id: ""
	I0127 15:40:43.919510 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.919521 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:43.919530 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:43.919593 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:43.958203 1076050 cri.go:89] found id: ""
	I0127 15:40:43.958242 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.958261 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:43.958270 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:43.958340 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:43.996729 1076050 cri.go:89] found id: ""
	I0127 15:40:43.996760 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.996769 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:43.996779 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:43.996792 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:44.051707 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:44.051748 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:44.069643 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:44.069674 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:44.146464 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:44.146489 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:44.146505 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:44.230654 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:44.230696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:46.788290 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:46.807855 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:46.807942 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:46.861569 1076050 cri.go:89] found id: ""
	I0127 15:40:46.861596 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.861608 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:46.861615 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:46.861684 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:46.919686 1076050 cri.go:89] found id: ""
	I0127 15:40:46.919719 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.919732 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:46.919741 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:46.919810 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:46.959359 1076050 cri.go:89] found id: ""
	I0127 15:40:46.959419 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.959432 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:46.959440 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:46.959503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:47.000445 1076050 cri.go:89] found id: ""
	I0127 15:40:47.000489 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.000503 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:47.000512 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:47.000583 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:47.041395 1076050 cri.go:89] found id: ""
	I0127 15:40:47.041426 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.041440 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:47.041449 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:47.041512 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:47.086753 1076050 cri.go:89] found id: ""
	I0127 15:40:47.086787 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.086800 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:47.086808 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:47.086883 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:47.128760 1076050 cri.go:89] found id: ""
	I0127 15:40:47.128788 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.128799 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:47.128807 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:47.128876 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:47.173743 1076050 cri.go:89] found id: ""
	I0127 15:40:47.173779 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.173791 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:47.173804 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:47.173818 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:47.280755 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:47.280817 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:47.343245 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:47.343291 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:47.425229 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:47.425282 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:47.446605 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:47.446649 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:47.563807 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:50.064460 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:50.080142 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:50.080219 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:50.120604 1076050 cri.go:89] found id: ""
	I0127 15:40:50.120643 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.120655 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:50.120661 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:50.120716 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:50.161728 1076050 cri.go:89] found id: ""
	I0127 15:40:50.161766 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.161777 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:50.161785 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:50.161851 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:50.199247 1076050 cri.go:89] found id: ""
	I0127 15:40:50.199275 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.199286 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:50.199293 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:50.199369 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:50.246623 1076050 cri.go:89] found id: ""
	I0127 15:40:50.246652 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.246663 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:50.246672 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:50.246742 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:50.284077 1076050 cri.go:89] found id: ""
	I0127 15:40:50.284111 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.284123 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:50.284132 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:50.284200 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:50.326481 1076050 cri.go:89] found id: ""
	I0127 15:40:50.326518 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.326530 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:50.326539 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:50.326597 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:50.364165 1076050 cri.go:89] found id: ""
	I0127 15:40:50.364198 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.364210 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:50.364218 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:50.364280 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:50.402527 1076050 cri.go:89] found id: ""
	I0127 15:40:50.402560 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.402572 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:50.402586 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:50.402602 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:50.485370 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:50.485412 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:50.539508 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:50.539547 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:50.591618 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:50.591656 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:50.609824 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:50.609873 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:50.694094 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:53.194813 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:53.211192 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:53.211271 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:53.258010 1076050 cri.go:89] found id: ""
	I0127 15:40:53.258042 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.258060 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:53.258069 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:53.258138 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:53.297402 1076050 cri.go:89] found id: ""
	I0127 15:40:53.297430 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.297440 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:53.297448 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:53.297511 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:53.336412 1076050 cri.go:89] found id: ""
	I0127 15:40:53.336440 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.336450 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:53.336457 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:53.336526 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:53.383904 1076050 cri.go:89] found id: ""
	I0127 15:40:53.383939 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.383950 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:53.383959 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:53.384031 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:53.435476 1076050 cri.go:89] found id: ""
	I0127 15:40:53.435512 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.435525 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:53.435533 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:53.435604 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:53.477359 1076050 cri.go:89] found id: ""
	I0127 15:40:53.477389 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.477400 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:53.477408 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:53.477473 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:53.522739 1076050 cri.go:89] found id: ""
	I0127 15:40:53.522777 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.522789 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:53.522798 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:53.522870 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:53.591524 1076050 cri.go:89] found id: ""
	I0127 15:40:53.591556 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.591568 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:53.591581 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:53.591601 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:53.645459 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:53.645495 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:53.662522 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:53.662551 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:53.743915 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:53.743940 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:53.743957 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:53.844477 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:53.844511 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:56.390836 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:56.404803 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:56.404892 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:56.448556 1076050 cri.go:89] found id: ""
	I0127 15:40:56.448586 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.448597 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:56.448606 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:56.448674 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:56.501798 1076050 cri.go:89] found id: ""
	I0127 15:40:56.501833 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.501854 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:56.501863 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:56.501932 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:56.549831 1076050 cri.go:89] found id: ""
	I0127 15:40:56.549882 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.549895 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:56.549904 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:56.549976 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:56.604199 1076050 cri.go:89] found id: ""
	I0127 15:40:56.604236 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.604248 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:56.604258 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:56.604361 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:56.662492 1076050 cri.go:89] found id: ""
	I0127 15:40:56.662529 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.662540 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:56.662550 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:56.662621 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:56.712694 1076050 cri.go:89] found id: ""
	I0127 15:40:56.712731 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.712743 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:56.712752 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:56.712821 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:56.759321 1076050 cri.go:89] found id: ""
	I0127 15:40:56.759355 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.759366 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:56.759375 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:56.759441 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:56.806457 1076050 cri.go:89] found id: ""
	I0127 15:40:56.806487 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.806499 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:56.806511 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:56.806528 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:56.885361 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:56.885416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:56.904333 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:56.904390 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:57.003794 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:57.003820 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:57.003845 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:57.107181 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:57.107240 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:59.656976 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:59.675626 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:59.675762 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:59.719313 1076050 cri.go:89] found id: ""
	I0127 15:40:59.719343 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.719351 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:59.719357 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:59.719441 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:59.758380 1076050 cri.go:89] found id: ""
	I0127 15:40:59.758419 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.758433 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:59.758441 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:59.758511 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:59.802754 1076050 cri.go:89] found id: ""
	I0127 15:40:59.802787 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.802798 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:59.802806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:59.802874 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:59.847665 1076050 cri.go:89] found id: ""
	I0127 15:40:59.847695 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.847707 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:59.847716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:59.847781 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:59.888840 1076050 cri.go:89] found id: ""
	I0127 15:40:59.888867 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.888875 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:59.888882 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:59.888946 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:59.935416 1076050 cri.go:89] found id: ""
	I0127 15:40:59.935448 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.935460 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:59.935468 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:59.935544 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:59.982418 1076050 cri.go:89] found id: ""
	I0127 15:40:59.982448 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.982456 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:59.982464 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:59.982539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:00.024752 1076050 cri.go:89] found id: ""
	I0127 15:41:00.024794 1076050 logs.go:282] 0 containers: []
	W0127 15:41:00.024806 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:00.024820 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:00.024839 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:00.044330 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:00.044369 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:00.130115 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:00.130216 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:00.130241 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:00.236534 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:00.236585 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:00.312265 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:00.312307 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:02.873155 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:02.889623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:02.889689 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:02.931491 1076050 cri.go:89] found id: ""
	I0127 15:41:02.931528 1076050 logs.go:282] 0 containers: []
	W0127 15:41:02.931537 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:02.931546 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:02.931615 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:02.968872 1076050 cri.go:89] found id: ""
	I0127 15:41:02.968912 1076050 logs.go:282] 0 containers: []
	W0127 15:41:02.968924 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:02.968932 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:02.969030 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:03.004397 1076050 cri.go:89] found id: ""
	I0127 15:41:03.004428 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.004437 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:03.004443 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:03.004498 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:03.042909 1076050 cri.go:89] found id: ""
	I0127 15:41:03.042937 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.042948 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:03.042955 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:03.043020 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:03.081525 1076050 cri.go:89] found id: ""
	I0127 15:41:03.081556 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.081567 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:03.081576 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:03.081645 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:03.122741 1076050 cri.go:89] found id: ""
	I0127 15:41:03.122773 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.122784 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:03.122793 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:03.122855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:03.159043 1076050 cri.go:89] found id: ""
	I0127 15:41:03.159069 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.159077 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:03.159090 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:03.159140 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:03.200367 1076050 cri.go:89] found id: ""
	I0127 15:41:03.200402 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.200414 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:03.200429 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:03.200447 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:03.291239 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:03.291291 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:03.336057 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:03.336098 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:03.395428 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:03.395480 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:03.411878 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:03.411911 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:03.498183 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:06.000178 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:06.024915 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:06.024973 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:06.098332 1076050 cri.go:89] found id: ""
	I0127 15:41:06.098361 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.098369 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:06.098375 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:06.098430 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:06.156082 1076050 cri.go:89] found id: ""
	I0127 15:41:06.156117 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.156129 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:06.156137 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:06.156203 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:06.217204 1076050 cri.go:89] found id: ""
	I0127 15:41:06.217235 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.217246 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:06.217255 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:06.217331 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:06.259003 1076050 cri.go:89] found id: ""
	I0127 15:41:06.259029 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.259041 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:06.259048 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:06.259123 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:06.298292 1076050 cri.go:89] found id: ""
	I0127 15:41:06.298330 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.298341 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:06.298349 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:06.298416 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:06.339173 1076050 cri.go:89] found id: ""
	I0127 15:41:06.339211 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.339224 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:06.339234 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:06.339309 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:06.381271 1076050 cri.go:89] found id: ""
	I0127 15:41:06.381300 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.381311 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:06.381320 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:06.381385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:06.429073 1076050 cri.go:89] found id: ""
	I0127 15:41:06.429134 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.429149 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:06.429164 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:06.429187 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:06.491509 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:06.491545 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:06.507964 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:06.508011 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:06.589122 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:06.589158 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:06.589173 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:06.668992 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:06.669051 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:09.224594 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:09.239525 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:09.239616 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:09.285116 1076050 cri.go:89] found id: ""
	I0127 15:41:09.285160 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.285172 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:09.285182 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:09.285252 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:09.342278 1076050 cri.go:89] found id: ""
	I0127 15:41:09.342307 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.342323 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:09.342332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:09.342397 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:09.385479 1076050 cri.go:89] found id: ""
	I0127 15:41:09.385506 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.385515 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:09.385521 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:09.385580 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:09.426386 1076050 cri.go:89] found id: ""
	I0127 15:41:09.426426 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.426439 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:09.426448 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:09.426516 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:09.468739 1076050 cri.go:89] found id: ""
	I0127 15:41:09.468776 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.468789 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:09.468798 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:09.468866 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:09.510885 1076050 cri.go:89] found id: ""
	I0127 15:41:09.510918 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.510931 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:09.510939 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:09.511007 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:09.548406 1076050 cri.go:89] found id: ""
	I0127 15:41:09.548442 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.548455 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:09.548464 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:09.548547 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:09.589727 1076050 cri.go:89] found id: ""
	I0127 15:41:09.589761 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.589773 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:09.589786 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:09.589802 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:09.641717 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:09.641759 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:09.712152 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:09.712220 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:09.730069 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:09.730119 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:09.808412 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:09.808447 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:09.808462 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:12.421654 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:12.440156 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:12.440298 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:12.489759 1076050 cri.go:89] found id: ""
	I0127 15:41:12.489788 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.489800 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:12.489809 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:12.489887 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:12.540068 1076050 cri.go:89] found id: ""
	I0127 15:41:12.540099 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.540108 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:12.540114 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:12.540178 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:12.587471 1076050 cri.go:89] found id: ""
	I0127 15:41:12.587497 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.587505 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:12.587511 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:12.587578 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:12.638634 1076050 cri.go:89] found id: ""
	I0127 15:41:12.638668 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.638680 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:12.638689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:12.638762 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:12.683784 1076050 cri.go:89] found id: ""
	I0127 15:41:12.683815 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.683826 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:12.683837 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:12.683900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:12.720438 1076050 cri.go:89] found id: ""
	I0127 15:41:12.720479 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.720488 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:12.720495 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:12.720548 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:12.759175 1076050 cri.go:89] found id: ""
	I0127 15:41:12.759207 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.759219 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:12.759226 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:12.759290 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:12.792624 1076050 cri.go:89] found id: ""
	I0127 15:41:12.792656 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.792668 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:12.792681 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:12.792697 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:12.878341 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:12.878386 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:12.926986 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:12.927028 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:12.982133 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:12.982172 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:12.999460 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:12.999503 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:13.087892 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:15.589166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:15.607749 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:15.607824 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:15.655722 1076050 cri.go:89] found id: ""
	I0127 15:41:15.655752 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.655764 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:15.655773 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:15.655847 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:15.703202 1076050 cri.go:89] found id: ""
	I0127 15:41:15.703235 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.703248 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:15.703256 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:15.703360 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:15.747335 1076050 cri.go:89] found id: ""
	I0127 15:41:15.747371 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.747383 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:15.747400 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:15.747470 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:15.786207 1076050 cri.go:89] found id: ""
	I0127 15:41:15.786245 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.786259 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:15.786269 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:15.786351 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:15.826251 1076050 cri.go:89] found id: ""
	I0127 15:41:15.826286 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.826298 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:15.826306 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:15.826435 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:15.873134 1076050 cri.go:89] found id: ""
	I0127 15:41:15.873167 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.873187 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:15.873195 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:15.873267 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:15.923221 1076050 cri.go:89] found id: ""
	I0127 15:41:15.923273 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.923286 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:15.923294 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:15.923364 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:15.967245 1076050 cri.go:89] found id: ""
	I0127 15:41:15.967282 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.967295 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:15.967309 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:15.967325 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:16.057675 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:16.057706 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:16.057722 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:16.141133 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:16.141181 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:16.186832 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:16.186869 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:16.255430 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:16.255473 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:18.774206 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:18.792191 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:18.792258 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:18.835636 1076050 cri.go:89] found id: ""
	I0127 15:41:18.835674 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.835685 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:18.835693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:18.835763 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:18.875370 1076050 cri.go:89] found id: ""
	I0127 15:41:18.875423 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.875435 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:18.875444 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:18.875517 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:18.915439 1076050 cri.go:89] found id: ""
	I0127 15:41:18.915469 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.915480 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:18.915489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:18.915554 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:18.962331 1076050 cri.go:89] found id: ""
	I0127 15:41:18.962359 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.962366 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:18.962372 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:18.962425 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:19.017809 1076050 cri.go:89] found id: ""
	I0127 15:41:19.017839 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.017849 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:19.017857 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:19.017924 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:19.066418 1076050 cri.go:89] found id: ""
	I0127 15:41:19.066454 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.066463 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:19.066469 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:19.066540 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:19.107181 1076050 cri.go:89] found id: ""
	I0127 15:41:19.107212 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.107221 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:19.107227 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:19.107286 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:19.148999 1076050 cri.go:89] found id: ""
	I0127 15:41:19.149043 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.149055 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:19.149070 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:19.149093 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:19.235472 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:19.235514 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:19.290762 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:19.290794 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:19.349155 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:19.349201 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:19.365924 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:19.365957 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:19.455480 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:21.957147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:21.971580 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:21.971732 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:22.011493 1076050 cri.go:89] found id: ""
	I0127 15:41:22.011523 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.011531 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:22.011537 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:22.011600 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:22.047592 1076050 cri.go:89] found id: ""
	I0127 15:41:22.047615 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.047623 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:22.047635 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:22.047704 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:22.084231 1076050 cri.go:89] found id: ""
	I0127 15:41:22.084258 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.084266 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:22.084272 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:22.084331 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:22.126843 1076050 cri.go:89] found id: ""
	I0127 15:41:22.126870 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.126881 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:22.126890 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:22.126952 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:22.167538 1076050 cri.go:89] found id: ""
	I0127 15:41:22.167563 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.167572 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:22.167579 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:22.167633 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:22.206138 1076050 cri.go:89] found id: ""
	I0127 15:41:22.206169 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.206180 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:22.206193 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:22.206259 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:22.245152 1076050 cri.go:89] found id: ""
	I0127 15:41:22.245186 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.245199 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:22.245207 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:22.245273 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:22.280780 1076050 cri.go:89] found id: ""
	I0127 15:41:22.280820 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.280831 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:22.280844 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:22.280859 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:22.333940 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:22.333975 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:22.348880 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:22.348910 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:22.421581 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:22.421610 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:22.421625 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:22.502157 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:22.502199 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:25.045123 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:25.058997 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:25.059058 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:25.094852 1076050 cri.go:89] found id: ""
	I0127 15:41:25.094881 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.094888 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:25.094896 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:25.094955 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:25.136390 1076050 cri.go:89] found id: ""
	I0127 15:41:25.136414 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.136424 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:25.136432 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:25.136491 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:25.173187 1076050 cri.go:89] found id: ""
	I0127 15:41:25.173213 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.173221 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:25.173226 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:25.173284 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:25.210946 1076050 cri.go:89] found id: ""
	I0127 15:41:25.210977 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.210990 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:25.210999 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:25.211082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:25.251607 1076050 cri.go:89] found id: ""
	I0127 15:41:25.251633 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.251643 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:25.251649 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:25.251702 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:25.286803 1076050 cri.go:89] found id: ""
	I0127 15:41:25.286831 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.286842 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:25.286849 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:25.286914 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:25.322818 1076050 cri.go:89] found id: ""
	I0127 15:41:25.322846 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.322857 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:25.322866 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:25.322936 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:25.361082 1076050 cri.go:89] found id: ""
	I0127 15:41:25.361110 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.361120 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:25.361130 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:25.361142 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:25.412378 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:25.412416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:25.427170 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:25.427206 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:25.498342 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:25.498377 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:25.498393 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:25.589099 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:25.589152 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:28.130224 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:28.145326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:28.145389 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:28.186258 1076050 cri.go:89] found id: ""
	I0127 15:41:28.186293 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.186316 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:28.186326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:28.186408 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:28.224332 1076050 cri.go:89] found id: ""
	I0127 15:41:28.224370 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.224382 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:28.224393 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:28.224462 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:28.262236 1076050 cri.go:89] found id: ""
	I0127 15:41:28.262267 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.262274 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:28.262282 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:28.262334 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:28.299248 1076050 cri.go:89] found id: ""
	I0127 15:41:28.299281 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.299290 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:28.299300 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:28.299358 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:28.340255 1076050 cri.go:89] found id: ""
	I0127 15:41:28.340289 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.340301 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:28.340326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:28.340396 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:28.384857 1076050 cri.go:89] found id: ""
	I0127 15:41:28.384891 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.384903 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:28.384912 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:28.384983 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:28.428121 1076050 cri.go:89] found id: ""
	I0127 15:41:28.428158 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.428169 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:28.428179 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:28.428248 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:28.473305 1076050 cri.go:89] found id: ""
	I0127 15:41:28.473332 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.473340 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:28.473350 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:28.473368 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:28.571238 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:28.571271 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:28.571316 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:28.651696 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:28.651731 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:28.692842 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:28.692870 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:28.748091 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:28.748133 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:31.262275 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:31.278085 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:31.278174 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:31.313339 1076050 cri.go:89] found id: ""
	I0127 15:41:31.313366 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.313375 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:31.313381 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:31.313450 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:31.351690 1076050 cri.go:89] found id: ""
	I0127 15:41:31.351716 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.351726 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:31.351732 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:31.351797 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:31.387516 1076050 cri.go:89] found id: ""
	I0127 15:41:31.387547 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.387556 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:31.387562 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:31.387617 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:31.422030 1076050 cri.go:89] found id: ""
	I0127 15:41:31.422062 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.422070 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:31.422076 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:31.422134 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:31.458563 1076050 cri.go:89] found id: ""
	I0127 15:41:31.458592 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.458604 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:31.458612 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:31.458679 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:31.496029 1076050 cri.go:89] found id: ""
	I0127 15:41:31.496064 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.496075 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:31.496090 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:31.496156 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:31.543782 1076050 cri.go:89] found id: ""
	I0127 15:41:31.543808 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.543816 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:31.543822 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:31.543874 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:31.581950 1076050 cri.go:89] found id: ""
	I0127 15:41:31.581987 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.582001 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:31.582014 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:31.582032 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:31.653329 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:31.653358 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:31.653374 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:31.736286 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:31.736323 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:31.782977 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:31.783009 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:31.842741 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:31.842773 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:34.357158 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:34.370137 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:34.370204 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:34.414297 1076050 cri.go:89] found id: ""
	I0127 15:41:34.414334 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.414347 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:34.414356 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:34.414437 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:34.450717 1076050 cri.go:89] found id: ""
	I0127 15:41:34.450749 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.450759 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:34.450767 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:34.450832 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:34.490881 1076050 cri.go:89] found id: ""
	I0127 15:41:34.490915 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.490928 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:34.490937 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:34.491012 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:34.526240 1076050 cri.go:89] found id: ""
	I0127 15:41:34.526277 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.526289 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:34.526297 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:34.526365 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:34.562664 1076050 cri.go:89] found id: ""
	I0127 15:41:34.562700 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.562712 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:34.562721 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:34.562788 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:34.600382 1076050 cri.go:89] found id: ""
	I0127 15:41:34.600411 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.600422 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:34.600430 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:34.600496 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:34.636399 1076050 cri.go:89] found id: ""
	I0127 15:41:34.636431 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.636443 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:34.636451 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:34.636518 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:34.676900 1076050 cri.go:89] found id: ""
	I0127 15:41:34.676935 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.676948 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:34.676961 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:34.676975 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:34.730519 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:34.730555 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:34.746159 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:34.746188 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:34.823410 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:34.823447 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:34.823468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:34.907572 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:34.907611 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:37.485412 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:37.499659 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:37.499761 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:37.536578 1076050 cri.go:89] found id: ""
	I0127 15:41:37.536608 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.536618 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:37.536627 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:37.536703 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:37.573737 1076050 cri.go:89] found id: ""
	I0127 15:41:37.573773 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.573783 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:37.573790 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:37.573861 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:37.611200 1076050 cri.go:89] found id: ""
	I0127 15:41:37.611232 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.611241 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:37.611248 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:37.611302 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:37.646784 1076050 cri.go:89] found id: ""
	I0127 15:41:37.646812 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.646823 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:37.646832 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:37.646900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:37.684664 1076050 cri.go:89] found id: ""
	I0127 15:41:37.684694 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.684706 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:37.684714 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:37.684777 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:37.721812 1076050 cri.go:89] found id: ""
	I0127 15:41:37.721850 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.721863 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:37.721874 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:37.721944 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:37.759256 1076050 cri.go:89] found id: ""
	I0127 15:41:37.759279 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.759287 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:37.759293 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:37.759345 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:37.798971 1076050 cri.go:89] found id: ""
	I0127 15:41:37.799004 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.799017 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:37.799030 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:37.799041 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:37.855679 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:37.855719 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:37.869799 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:37.869833 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:37.943918 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:37.943944 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:37.943956 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:38.035563 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:38.035611 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:40.581178 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:40.597341 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:40.597409 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:40.634799 1076050 cri.go:89] found id: ""
	I0127 15:41:40.634827 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.634836 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:40.634843 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:40.634910 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:40.684392 1076050 cri.go:89] found id: ""
	I0127 15:41:40.684421 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.684429 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:40.684437 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:40.684504 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:40.729085 1076050 cri.go:89] found id: ""
	I0127 15:41:40.729120 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.729131 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:40.729139 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:40.729212 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:40.778437 1076050 cri.go:89] found id: ""
	I0127 15:41:40.778469 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.778482 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:40.778489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:40.778556 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:40.820889 1076050 cri.go:89] found id: ""
	I0127 15:41:40.820914 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.820922 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:40.820928 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:40.820992 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:40.858256 1076050 cri.go:89] found id: ""
	I0127 15:41:40.858284 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.858296 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:40.858304 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:40.858374 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:40.897931 1076050 cri.go:89] found id: ""
	I0127 15:41:40.897957 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.897966 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:40.897972 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:40.898026 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:40.937068 1076050 cri.go:89] found id: ""
	I0127 15:41:40.937100 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.937111 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:40.937124 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:40.937138 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:41.012844 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:41.012867 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:41.012880 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:41.093680 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:41.093722 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:41.136964 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:41.136996 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:41.190396 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:41.190435 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:43.708328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:43.722838 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:43.722928 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:43.762360 1076050 cri.go:89] found id: ""
	I0127 15:41:43.762395 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.762407 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:43.762416 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:43.762483 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:43.802226 1076050 cri.go:89] found id: ""
	I0127 15:41:43.802266 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.802279 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:43.802287 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:43.802363 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:43.848037 1076050 cri.go:89] found id: ""
	I0127 15:41:43.848067 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.848081 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:43.848100 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:43.848167 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:43.891393 1076050 cri.go:89] found id: ""
	I0127 15:41:43.891491 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.891506 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:43.891516 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:43.891585 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:43.936352 1076050 cri.go:89] found id: ""
	I0127 15:41:43.936447 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.936467 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:43.936481 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:43.936632 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:43.980165 1076050 cri.go:89] found id: ""
	I0127 15:41:43.980192 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.980200 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:43.980206 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:43.980264 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:44.019889 1076050 cri.go:89] found id: ""
	I0127 15:41:44.019925 1076050 logs.go:282] 0 containers: []
	W0127 15:41:44.019938 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:44.019946 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:44.020005 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:44.057363 1076050 cri.go:89] found id: ""
	I0127 15:41:44.057400 1076050 logs.go:282] 0 containers: []
	W0127 15:41:44.057412 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:44.057426 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:44.057442 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:44.072218 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:44.072249 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:44.148918 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:44.148944 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:44.148960 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:44.231300 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:44.231347 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:44.273468 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:44.273507 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:46.833142 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:46.848106 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:46.848174 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:46.886223 1076050 cri.go:89] found id: ""
	I0127 15:41:46.886250 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.886258 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:46.886264 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:46.886315 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:46.923854 1076050 cri.go:89] found id: ""
	I0127 15:41:46.923883 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.923891 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:46.923903 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:46.923956 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:46.962084 1076050 cri.go:89] found id: ""
	I0127 15:41:46.962112 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.962120 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:46.962128 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:46.962189 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:46.998299 1076050 cri.go:89] found id: ""
	I0127 15:41:46.998329 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.998338 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:46.998344 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:46.998401 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:47.036481 1076050 cri.go:89] found id: ""
	I0127 15:41:47.036519 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.036531 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:47.036540 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:47.036606 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:47.072486 1076050 cri.go:89] found id: ""
	I0127 15:41:47.072522 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.072534 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:47.072543 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:47.072610 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:47.116871 1076050 cri.go:89] found id: ""
	I0127 15:41:47.116912 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.116937 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:47.116947 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:47.117049 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:47.157060 1076050 cri.go:89] found id: ""
	I0127 15:41:47.157092 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.157104 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:47.157118 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:47.157135 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:47.210998 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:47.211040 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:47.224898 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:47.224926 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:47.306490 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:47.306521 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:47.306540 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:47.394529 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:47.394582 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:49.942182 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:49.958258 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:49.958321 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:49.997962 1076050 cri.go:89] found id: ""
	I0127 15:41:49.997999 1076050 logs.go:282] 0 containers: []
	W0127 15:41:49.998019 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:49.998029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:49.998091 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:50.042973 1076050 cri.go:89] found id: ""
	I0127 15:41:50.043007 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.043015 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:50.043021 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:50.043078 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:50.080466 1076050 cri.go:89] found id: ""
	I0127 15:41:50.080496 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.080506 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:50.080514 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:50.080581 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:50.122155 1076050 cri.go:89] found id: ""
	I0127 15:41:50.122187 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.122199 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:50.122208 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:50.122270 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:50.160215 1076050 cri.go:89] found id: ""
	I0127 15:41:50.160245 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.160254 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:50.160262 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:50.160315 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:50.200684 1076050 cri.go:89] found id: ""
	I0127 15:41:50.200710 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.200719 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:50.200724 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:50.200790 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:50.238625 1076050 cri.go:89] found id: ""
	I0127 15:41:50.238650 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.238658 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:50.238664 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:50.238721 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:50.276187 1076050 cri.go:89] found id: ""
	I0127 15:41:50.276217 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.276227 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:50.276238 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:50.276258 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:50.327617 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:50.327675 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:50.343530 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:50.343561 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:50.420740 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:50.420764 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:50.420776 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:50.506757 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:50.506809 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:53.057745 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:53.073259 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:53.073338 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:53.111798 1076050 cri.go:89] found id: ""
	I0127 15:41:53.111831 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.111839 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:53.111849 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:53.111921 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:53.151928 1076050 cri.go:89] found id: ""
	I0127 15:41:53.151959 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.151970 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:53.151978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:53.152045 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:53.187310 1076050 cri.go:89] found id: ""
	I0127 15:41:53.187357 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.187369 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:53.187377 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:53.187443 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:53.230758 1076050 cri.go:89] found id: ""
	I0127 15:41:53.230786 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.230795 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:53.230800 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:53.230852 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:53.266244 1076050 cri.go:89] found id: ""
	I0127 15:41:53.266276 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.266285 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:53.266291 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:53.266356 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:53.302601 1076050 cri.go:89] found id: ""
	I0127 15:41:53.302628 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.302638 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:53.302647 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:53.302710 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:53.342505 1076050 cri.go:89] found id: ""
	I0127 15:41:53.342541 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.342551 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:53.342561 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:53.342643 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:53.379672 1076050 cri.go:89] found id: ""
	I0127 15:41:53.379706 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.379718 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:53.379730 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:53.379745 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:53.421809 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:53.421852 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:53.475330 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:53.475369 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:53.490625 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:53.490652 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:53.560602 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:53.560627 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:53.560637 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:56.148600 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:56.162485 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:56.162564 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:56.200397 1076050 cri.go:89] found id: ""
	I0127 15:41:56.200434 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.200447 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:56.200458 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:56.200523 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:56.236022 1076050 cri.go:89] found id: ""
	I0127 15:41:56.236067 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.236078 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:56.236086 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:56.236154 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:56.275920 1076050 cri.go:89] found id: ""
	I0127 15:41:56.275956 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.275966 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:56.275975 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:56.276046 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:56.312921 1076050 cri.go:89] found id: ""
	I0127 15:41:56.312953 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.312963 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:56.312971 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:56.313056 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:56.352348 1076050 cri.go:89] found id: ""
	I0127 15:41:56.352373 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.352381 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:56.352387 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:56.352440 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:56.398556 1076050 cri.go:89] found id: ""
	I0127 15:41:56.398591 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.398603 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:56.398617 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:56.398686 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:56.440032 1076050 cri.go:89] found id: ""
	I0127 15:41:56.440063 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.440071 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:56.440078 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:56.440137 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:56.476249 1076050 cri.go:89] found id: ""
	I0127 15:41:56.476280 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.476291 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:56.476305 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:56.476321 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:56.530965 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:56.531017 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:56.545838 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:56.545869 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:56.618187 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:56.618245 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:56.618257 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:56.701048 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:56.701087 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:59.248508 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:59.262851 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:59.262928 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:59.300917 1076050 cri.go:89] found id: ""
	I0127 15:41:59.300947 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.300959 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:59.300967 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:59.301062 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:59.345421 1076050 cri.go:89] found id: ""
	I0127 15:41:59.345452 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.345463 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:59.345471 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:59.345568 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:59.381990 1076050 cri.go:89] found id: ""
	I0127 15:41:59.382025 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.382037 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:59.382046 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:59.382115 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:59.420410 1076050 cri.go:89] found id: ""
	I0127 15:41:59.420456 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.420466 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:59.420472 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:59.420543 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:59.461365 1076050 cri.go:89] found id: ""
	I0127 15:41:59.461391 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.461403 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:59.461412 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:59.461480 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:59.497094 1076050 cri.go:89] found id: ""
	I0127 15:41:59.497122 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.497130 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:59.497136 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:59.497201 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:59.537636 1076050 cri.go:89] found id: ""
	I0127 15:41:59.537663 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.537672 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:59.537680 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:59.537780 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:59.572954 1076050 cri.go:89] found id: ""
	I0127 15:41:59.572984 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.572993 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:59.573023 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:59.573039 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:59.660416 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:59.660457 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:59.702396 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:59.702423 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:59.758534 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:59.758583 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:59.772463 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:59.772496 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:59.849599 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:02.350500 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:02.364408 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:02.364483 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:02.400537 1076050 cri.go:89] found id: ""
	I0127 15:42:02.400574 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.400588 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:02.400596 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:02.400664 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:02.442696 1076050 cri.go:89] found id: ""
	I0127 15:42:02.442731 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.442743 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:02.442751 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:02.442825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:02.485485 1076050 cri.go:89] found id: ""
	I0127 15:42:02.485511 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.485522 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:02.485529 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:02.485595 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:02.524989 1076050 cri.go:89] found id: ""
	I0127 15:42:02.525036 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.525048 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:02.525057 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:02.525137 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:02.560538 1076050 cri.go:89] found id: ""
	I0127 15:42:02.560567 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.560578 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:02.560586 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:02.560649 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:02.602960 1076050 cri.go:89] found id: ""
	I0127 15:42:02.602996 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.603008 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:02.603017 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:02.603082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:02.645389 1076050 cri.go:89] found id: ""
	I0127 15:42:02.645415 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.645425 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:02.645436 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:02.645502 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:02.689493 1076050 cri.go:89] found id: ""
	I0127 15:42:02.689526 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.689537 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:02.689549 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:02.689578 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:02.746806 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:02.746848 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:02.761212 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:02.761243 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:02.841116 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:02.841135 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:02.841147 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:02.932117 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:02.932159 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:05.477139 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:05.491255 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:05.491337 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:05.527520 1076050 cri.go:89] found id: ""
	I0127 15:42:05.527551 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.527563 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:05.527572 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:05.527639 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:05.569699 1076050 cri.go:89] found id: ""
	I0127 15:42:05.569731 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.569743 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:05.569752 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:05.569825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:05.607615 1076050 cri.go:89] found id: ""
	I0127 15:42:05.607654 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.607667 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:05.607677 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:05.607750 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:05.644591 1076050 cri.go:89] found id: ""
	I0127 15:42:05.644622 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.644634 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:05.644642 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:05.644693 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:05.684235 1076050 cri.go:89] found id: ""
	I0127 15:42:05.684258 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.684265 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:05.684272 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:05.684327 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:05.722858 1076050 cri.go:89] found id: ""
	I0127 15:42:05.722902 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.722914 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:05.722924 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:05.722989 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:05.759028 1076050 cri.go:89] found id: ""
	I0127 15:42:05.759062 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.759074 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:05.759082 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:05.759203 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:05.799551 1076050 cri.go:89] found id: ""
	I0127 15:42:05.799580 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.799592 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:05.799608 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:05.799624 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:05.859709 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:05.859763 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:05.873857 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:05.873893 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:05.950048 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:05.950080 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:05.950097 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:06.027916 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:06.027961 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:08.576361 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:08.591092 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:08.591172 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:08.629233 1076050 cri.go:89] found id: ""
	I0127 15:42:08.629262 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.629271 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:08.629277 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:08.629330 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:08.664138 1076050 cri.go:89] found id: ""
	I0127 15:42:08.664172 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.664183 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:08.664192 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:08.664254 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:08.702076 1076050 cri.go:89] found id: ""
	I0127 15:42:08.702113 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.702124 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:08.702132 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:08.702195 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:08.738780 1076050 cri.go:89] found id: ""
	I0127 15:42:08.738813 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.738823 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:08.738831 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:08.738904 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:08.773890 1076050 cri.go:89] found id: ""
	I0127 15:42:08.773922 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.773930 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:08.773936 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:08.773987 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:08.808430 1076050 cri.go:89] found id: ""
	I0127 15:42:08.808465 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.808477 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:08.808485 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:08.808553 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:08.844590 1076050 cri.go:89] found id: ""
	I0127 15:42:08.844615 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.844626 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:08.844634 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:08.844701 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:08.888333 1076050 cri.go:89] found id: ""
	I0127 15:42:08.888368 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.888377 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:08.888388 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:08.888420 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:08.941417 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:08.941453 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:08.956868 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:08.956942 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:09.049362 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:09.049390 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:09.049406 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:09.129215 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:09.129255 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:11.675550 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:11.690737 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:11.690808 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:11.727524 1076050 cri.go:89] found id: ""
	I0127 15:42:11.727554 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.727564 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:11.727572 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:11.727635 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:11.764046 1076050 cri.go:89] found id: ""
	I0127 15:42:11.764073 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.764082 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:11.764089 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:11.764142 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:11.799530 1076050 cri.go:89] found id: ""
	I0127 15:42:11.799562 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.799574 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:11.799582 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:11.799647 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:11.839880 1076050 cri.go:89] found id: ""
	I0127 15:42:11.839912 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.839921 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:11.839927 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:11.839989 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:11.876263 1076050 cri.go:89] found id: ""
	I0127 15:42:11.876313 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.876324 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:11.876332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:11.876403 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:11.919106 1076050 cri.go:89] found id: ""
	I0127 15:42:11.919136 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.919144 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:11.919150 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:11.919209 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:11.957253 1076050 cri.go:89] found id: ""
	I0127 15:42:11.957285 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.957296 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:11.957304 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:11.957369 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:11.993481 1076050 cri.go:89] found id: ""
	I0127 15:42:11.993515 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.993527 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:11.993544 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:11.993560 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:12.063236 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:12.063264 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:12.063285 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:12.149889 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:12.149932 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:12.195704 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:12.195730 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:12.254422 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:12.254457 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:14.768483 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:14.782452 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:14.782539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:14.822523 1076050 cri.go:89] found id: ""
	I0127 15:42:14.822558 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.822570 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:14.822576 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:14.822654 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:14.861058 1076050 cri.go:89] found id: ""
	I0127 15:42:14.861085 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.861094 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:14.861099 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:14.861164 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:14.898147 1076050 cri.go:89] found id: ""
	I0127 15:42:14.898178 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.898189 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:14.898199 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:14.898265 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:14.936269 1076050 cri.go:89] found id: ""
	I0127 15:42:14.936299 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.936307 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:14.936313 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:14.936378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:14.971287 1076050 cri.go:89] found id: ""
	I0127 15:42:14.971320 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.971332 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:14.971341 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:14.971394 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:15.011649 1076050 cri.go:89] found id: ""
	I0127 15:42:15.011679 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.011687 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:15.011693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:15.011744 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:15.047290 1076050 cri.go:89] found id: ""
	I0127 15:42:15.047329 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.047340 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:15.047349 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:15.047413 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:15.089625 1076050 cri.go:89] found id: ""
	I0127 15:42:15.089655 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.089667 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:15.089680 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:15.089694 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:15.136374 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:15.136410 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:15.195628 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:15.195676 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:15.213575 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:15.213679 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:15.293664 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:15.293694 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:15.293707 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:17.882520 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:17.896333 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:17.896403 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:17.935049 1076050 cri.go:89] found id: ""
	I0127 15:42:17.935078 1076050 logs.go:282] 0 containers: []
	W0127 15:42:17.935088 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:17.935096 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:17.935158 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:17.972911 1076050 cri.go:89] found id: ""
	I0127 15:42:17.972946 1076050 logs.go:282] 0 containers: []
	W0127 15:42:17.972958 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:17.972967 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:17.973073 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:18.017249 1076050 cri.go:89] found id: ""
	I0127 15:42:18.017276 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.017286 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:18.017292 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:18.017353 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:18.059963 1076050 cri.go:89] found id: ""
	I0127 15:42:18.059995 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.060007 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:18.060016 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:18.060086 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:18.106174 1076050 cri.go:89] found id: ""
	I0127 15:42:18.106219 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.106232 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:18.106248 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:18.106318 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:18.146130 1076050 cri.go:89] found id: ""
	I0127 15:42:18.146161 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.146176 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:18.146184 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:18.146256 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:18.184143 1076050 cri.go:89] found id: ""
	I0127 15:42:18.184176 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.184185 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:18.184191 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:18.184246 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:18.225042 1076050 cri.go:89] found id: ""
	I0127 15:42:18.225084 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.225096 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:18.225110 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:18.225127 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:18.263543 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:18.263577 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:18.321274 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:18.321323 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:18.336830 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:18.336861 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:18.420928 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:18.420955 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:18.420971 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:21.014731 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:21.030978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:21.031048 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:21.071340 1076050 cri.go:89] found id: ""
	I0127 15:42:21.071370 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.071378 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:21.071385 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:21.071442 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:21.107955 1076050 cri.go:89] found id: ""
	I0127 15:42:21.107987 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.107999 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:21.108006 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:21.108073 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:21.148426 1076050 cri.go:89] found id: ""
	I0127 15:42:21.148465 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.148477 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:21.148488 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:21.148561 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:21.199228 1076050 cri.go:89] found id: ""
	I0127 15:42:21.199262 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.199273 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:21.199282 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:21.199353 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:21.259122 1076050 cri.go:89] found id: ""
	I0127 15:42:21.259156 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.259167 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:21.259175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:21.259249 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:21.316242 1076050 cri.go:89] found id: ""
	I0127 15:42:21.316288 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.316300 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:21.316309 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:21.316378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:21.360071 1076050 cri.go:89] found id: ""
	I0127 15:42:21.360104 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.360116 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:21.360125 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:21.360190 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:21.405056 1076050 cri.go:89] found id: ""
	I0127 15:42:21.405088 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.405099 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:21.405112 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:21.405129 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:21.419657 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:21.419688 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:21.495931 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:21.495957 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:21.495973 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:21.578029 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:21.578075 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:21.626705 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:21.626742 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:24.180267 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:24.193848 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:24.193927 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:24.232734 1076050 cri.go:89] found id: ""
	I0127 15:42:24.232767 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.232778 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:24.232787 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:24.232855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:24.274373 1076050 cri.go:89] found id: ""
	I0127 15:42:24.274410 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.274421 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:24.274430 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:24.274486 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:24.314420 1076050 cri.go:89] found id: ""
	I0127 15:42:24.314449 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.314459 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:24.314469 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:24.314533 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:24.353247 1076050 cri.go:89] found id: ""
	I0127 15:42:24.353284 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.353302 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:24.353311 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:24.353380 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:24.395518 1076050 cri.go:89] found id: ""
	I0127 15:42:24.395545 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.395556 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:24.395564 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:24.395630 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:24.433954 1076050 cri.go:89] found id: ""
	I0127 15:42:24.433988 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.433999 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:24.434008 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:24.434078 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:24.475406 1076050 cri.go:89] found id: ""
	I0127 15:42:24.475438 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.475451 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:24.475460 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:24.475530 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:24.511024 1076050 cri.go:89] found id: ""
	I0127 15:42:24.511062 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.511074 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:24.511086 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:24.511105 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:24.585723 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:24.585746 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:24.585766 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:24.666956 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:24.666997 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:24.707929 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:24.707953 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:24.761870 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:24.761906 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:27.276721 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:27.292246 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:27.292341 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:27.332682 1076050 cri.go:89] found id: ""
	I0127 15:42:27.332715 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.332725 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:27.332733 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:27.332804 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:27.368942 1076050 cri.go:89] found id: ""
	I0127 15:42:27.368975 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.368988 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:27.368997 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:27.369083 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:27.406074 1076050 cri.go:89] found id: ""
	I0127 15:42:27.406116 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.406133 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:27.406141 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:27.406195 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:27.443019 1076050 cri.go:89] found id: ""
	I0127 15:42:27.443049 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.443061 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:27.443069 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:27.443136 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:27.478322 1076050 cri.go:89] found id: ""
	I0127 15:42:27.478359 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.478370 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:27.478380 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:27.478463 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:27.517749 1076050 cri.go:89] found id: ""
	I0127 15:42:27.517781 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.517793 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:27.517802 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:27.517868 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:27.556151 1076050 cri.go:89] found id: ""
	I0127 15:42:27.556182 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.556191 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:27.556197 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:27.556260 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:27.594607 1076050 cri.go:89] found id: ""
	I0127 15:42:27.594638 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.594646 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:27.594656 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:27.594666 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:27.675142 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:27.675184 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:27.719306 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:27.719341 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:27.771036 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:27.771076 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:27.785422 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:27.785451 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:27.863147 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:30.364006 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:30.378275 1076050 kubeadm.go:597] duration metric: took 4m3.244067669s to restartPrimaryControlPlane
	W0127 15:42:30.378392 1076050 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:42:30.378427 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:42:32.324859 1076050 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.946405854s)
	I0127 15:42:32.324949 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:42:32.342099 1076050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:42:32.353110 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:42:32.365238 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:42:32.365259 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:42:32.365309 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:42:32.376623 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:42:32.376679 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:42:32.387533 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:42:32.397645 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:42:32.397706 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:42:32.409015 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:42:32.420172 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:42:32.420236 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:42:32.430688 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:42:32.441797 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:42:32.441856 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:42:32.452009 1076050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:42:32.678031 1076050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:44:29.249145 1076050 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:44:29.249258 1076050 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:44:29.250830 1076050 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:44:29.250891 1076050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:44:29.251016 1076050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:44:29.251168 1076050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:44:29.251317 1076050 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:44:29.251390 1076050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:44:29.253163 1076050 out.go:235]   - Generating certificates and keys ...
	I0127 15:44:29.253266 1076050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:44:29.253389 1076050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:44:29.253470 1076050 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:44:29.253522 1076050 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:44:29.253581 1076050 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:44:29.253626 1076050 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:44:29.253704 1076050 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:44:29.253772 1076050 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:44:29.253864 1076050 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:44:29.253956 1076050 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:44:29.254008 1076050 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:44:29.254112 1076050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:44:29.254215 1076050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:44:29.254305 1076050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:44:29.254391 1076050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:44:29.254466 1076050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:44:29.254625 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:44:29.254763 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:44:29.254826 1076050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:44:29.254989 1076050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:44:29.256624 1076050 out.go:235]   - Booting up control plane ...
	I0127 15:44:29.256744 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:44:29.256829 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:44:29.256905 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:44:29.257025 1076050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:44:29.257228 1076050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:44:29.257290 1076050 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:44:29.257373 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.257657 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.257767 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.257963 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258031 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258254 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258355 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258591 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258669 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258862 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258871 1076050 kubeadm.go:310] 
	I0127 15:44:29.258904 1076050 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:44:29.258972 1076050 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:44:29.258989 1076050 kubeadm.go:310] 
	I0127 15:44:29.259027 1076050 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:44:29.259057 1076050 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:44:29.259205 1076050 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:44:29.259221 1076050 kubeadm.go:310] 
	I0127 15:44:29.259358 1076050 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:44:29.259391 1076050 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:44:29.259444 1076050 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:44:29.259459 1076050 kubeadm.go:310] 
	I0127 15:44:29.259593 1076050 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:44:29.259701 1076050 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:44:29.259710 1076050 kubeadm.go:310] 
	I0127 15:44:29.259818 1076050 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:44:29.259940 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:44:29.260041 1076050 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:44:29.260150 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:44:29.260179 1076050 kubeadm.go:310] 
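	The kubeadm output above points at the kubelet as the first thing to check. A minimal sketch of following that advice on the minikube node over SSH (the profile name is a placeholder for the cluster under test; the individual commands are the ones the log itself suggests):
	
	# status and recent journal of the kubelet service
	minikube ssh -p <profile> -- sudo systemctl status kubelet
	minikube ssh -p <profile> -- "sudo journalctl -xeu kubelet | tail -n 100"
	# list any control-plane containers CRI-O managed to start
	minikube ssh -p <profile> -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"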
	W0127 15:44:29.260362 1076050 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 15:44:29.260421 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:44:29.751111 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:44:29.767368 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:44:29.778471 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:44:29.778498 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:44:29.778554 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:44:29.789258 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:44:29.789331 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:44:29.799796 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:44:29.809761 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:44:29.809824 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:44:29.819822 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:44:29.829277 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:44:29.829350 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:44:29.840607 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:44:29.850589 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:44:29.850656 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
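	The stale-config check in the lines above follows one pattern per file: grep the kubeconfig for the expected control-plane endpoint and remove the file when the grep fails (which also covers the case seen here, where the file does not exist at all). As a rough sketch of the equivalent shell:
	
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done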
	I0127 15:44:29.860352 1076050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:44:29.931615 1076050 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:44:29.931737 1076050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:44:30.090907 1076050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:44:30.091038 1076050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:44:30.091180 1076050 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:44:30.288545 1076050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:44:30.290548 1076050 out.go:235]   - Generating certificates and keys ...
	I0127 15:44:30.290678 1076050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:44:30.290777 1076050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:44:30.290899 1076050 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:44:30.290993 1076050 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:44:30.291119 1076050 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:44:30.291213 1076050 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:44:30.291312 1076050 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:44:30.291399 1076050 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:44:30.291523 1076050 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:44:30.291640 1076050 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:44:30.291718 1076050 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:44:30.291806 1076050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:44:30.471428 1076050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:44:30.705804 1076050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:44:30.959802 1076050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:44:31.149201 1076050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:44:31.173695 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:44:31.174653 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:44:31.174752 1076050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:44:31.342124 1076050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:44:31.344077 1076050 out.go:235]   - Booting up control plane ...
	I0127 15:44:31.344184 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:44:31.348014 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:44:31.349159 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:44:31.349960 1076050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:44:31.352168 1076050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:45:11.354910 1076050 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:45:11.355380 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:11.355582 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:16.356239 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:16.356487 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:26.357276 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:26.357605 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:46.358046 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:46.358293 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:46:26.356549 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:46:26.356813 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:46:26.356830 1076050 kubeadm.go:310] 
	I0127 15:46:26.356897 1076050 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:46:26.356938 1076050 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:46:26.356949 1076050 kubeadm.go:310] 
	I0127 15:46:26.357026 1076050 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:46:26.357106 1076050 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:46:26.357302 1076050 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:46:26.357336 1076050 kubeadm.go:310] 
	I0127 15:46:26.357498 1076050 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:46:26.357548 1076050 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:46:26.357607 1076050 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:46:26.357624 1076050 kubeadm.go:310] 
	I0127 15:46:26.357766 1076050 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:46:26.357862 1076050 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:46:26.357878 1076050 kubeadm.go:310] 
	I0127 15:46:26.358043 1076050 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:46:26.358166 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:46:26.358290 1076050 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:46:26.358368 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:46:26.358379 1076050 kubeadm.go:310] 
	I0127 15:46:26.358971 1076050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:46:26.359102 1076050 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:46:26.359219 1076050 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:46:26.359281 1076050 kubeadm.go:394] duration metric: took 7m59.27977519s to StartCluster
	I0127 15:46:26.359443 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:46:26.359522 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:46:26.408713 1076050 cri.go:89] found id: ""
	I0127 15:46:26.408752 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.408764 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:46:26.408772 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:46:26.408832 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:46:26.449156 1076050 cri.go:89] found id: ""
	I0127 15:46:26.449190 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.449200 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:46:26.449208 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:46:26.449306 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:46:26.487786 1076050 cri.go:89] found id: ""
	I0127 15:46:26.487812 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.487820 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:46:26.487827 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:46:26.487876 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:46:26.546745 1076050 cri.go:89] found id: ""
	I0127 15:46:26.546772 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.546782 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:46:26.546791 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:46:26.546855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:46:26.584262 1076050 cri.go:89] found id: ""
	I0127 15:46:26.584300 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.584308 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:46:26.584316 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:46:26.584385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:46:26.622575 1076050 cri.go:89] found id: ""
	I0127 15:46:26.622608 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.622617 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:46:26.622623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:46:26.622683 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:46:26.660928 1076050 cri.go:89] found id: ""
	I0127 15:46:26.660955 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.660964 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:46:26.660970 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:46:26.661062 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:46:26.698084 1076050 cri.go:89] found id: ""
	I0127 15:46:26.698116 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.698125 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:46:26.698139 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:46:26.698151 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:46:26.742459 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:46:26.742486 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:46:26.797935 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:46:26.797977 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:46:26.814213 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:46:26.814248 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:46:26.903335 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:46:26.903373 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:46:26.903392 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
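	With no kube-apiserver, etcd, scheduler, or controller-manager containers found, the remaining signal is in the kubelet and CRI-O journals that the harness gathers above. A sketch of collecting the same information by hand inside the VM (command forms taken from the log; exact flags may vary between versions):
	
	sudo crictl ps -a
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u crio -n 400 --no-pager
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400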
	W0127 15:46:27.016392 1076050 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 15:46:27.016470 1076050 out.go:270] * 
	W0127 15:46:27.016547 1076050 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:46:27.016561 1076050 out.go:270] * 
	W0127 15:46:27.017322 1076050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 15:46:27.020682 1076050 out.go:201] 
	W0127 15:46:27.022217 1076050 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:46:27.022269 1076050 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 15:46:27.022288 1076050 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 15:46:27.023966 1076050 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-405706 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
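Editor's note: the failure above is kubeadm's wait-control-plane phase timing out because the kubelet on the old-k8s-version node never answered its health check on port 10248. The log's own suggestion is to retry the start with an explicit systemd cgroup driver for the kubelet. A minimal sketch of that retry, reusing the profile name and flags from the failing command above (the extra flag is the one suggested in the output, not an independently verified fix):

	out/minikube-linux-amd64 start -p old-k8s-version-405706 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd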
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 2 (267.479482ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
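Editor's note: before reading the post-mortem logs below, the kubeadm hints printed earlier (kubelet status, kubelet journal, CRI-O containers) can be followed up directly inside the node over SSH. A sketch, assuming the VM is still reachable under the same profile name (the host reported "Running" above):

	out/minikube-linux-amd64 ssh -p old-k8s-version-405706 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-405706 -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-405706 -- \
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a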
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-405706 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-405706 logs -n 25: (1.110868147s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-230388 sudo cat                              | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo find                             | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo crio                             | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-230388                                       | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-147179 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | disable-driver-mounts-147179                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:33 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-458006             | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-349782            | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-912913  | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:35 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-458006                  | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-349782                 | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-912913       | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-405706        | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-405706             | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 15:37:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 15:37:58.460225 1076050 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:37:58.460642 1076050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:37:58.460654 1076050 out.go:358] Setting ErrFile to fd 2...
	I0127 15:37:58.460661 1076050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:37:58.461077 1076050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:37:58.462086 1076050 out.go:352] Setting JSON to false
	I0127 15:37:58.463486 1076050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22825,"bootTime":1737969453,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:37:58.463630 1076050 start.go:139] virtualization: kvm guest
	I0127 15:37:58.465774 1076050 out.go:177] * [old-k8s-version-405706] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:37:58.467019 1076050 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:37:58.467027 1076050 notify.go:220] Checking for updates...
	I0127 15:37:58.469366 1076050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:37:58.470862 1076050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:37:58.472239 1076050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:37:58.473602 1076050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:37:58.474992 1076050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:37:58.477098 1076050 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:37:58.477731 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.477799 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.494965 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0127 15:37:58.495385 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.495879 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.495901 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.496287 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.496581 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.498539 1076050 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 15:37:58.499766 1076050 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:37:58.500092 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.500132 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.516530 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0127 15:37:58.517083 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.517634 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.517666 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.518105 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.518356 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.558744 1076050 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:37:58.560294 1076050 start.go:297] selected driver: kvm2
	I0127 15:37:58.560309 1076050 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:37:58.560451 1076050 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:37:58.561175 1076050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:37:58.561284 1076050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:37:58.579056 1076050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:37:58.579656 1076050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:37:58.579710 1076050 cni.go:84] Creating CNI manager for ""
	I0127 15:37:58.579776 1076050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:37:58.579842 1076050 start.go:340] cluster config:
	{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:37:58.580020 1076050 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:37:58.581716 1076050 out.go:177] * Starting "old-k8s-version-405706" primary control-plane node in "old-k8s-version-405706" cluster
	I0127 15:37:58.582897 1076050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:37:58.582967 1076050 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 15:37:58.582980 1076050 cache.go:56] Caching tarball of preloaded images
	I0127 15:37:58.583091 1076050 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:37:58.583107 1076050 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 15:37:58.583235 1076050 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:37:58.583561 1076050 start.go:360] acquireMachinesLock for old-k8s-version-405706: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:37:58.583628 1076050 start.go:364] duration metric: took 38.743µs to acquireMachinesLock for "old-k8s-version-405706"
	I0127 15:37:58.583652 1076050 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:37:58.583664 1076050 fix.go:54] fixHost starting: 
	I0127 15:37:58.584041 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.584088 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.599995 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0127 15:37:58.600476 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.600955 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.600978 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.601364 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.601600 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.601761 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetState
	I0127 15:37:58.603539 1076050 fix.go:112] recreateIfNeeded on old-k8s-version-405706: state=Stopped err=<nil>
	I0127 15:37:58.603586 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	W0127 15:37:58.603763 1076050 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:37:58.606243 1076050 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-405706" ...
	I0127 15:37:54.081369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:56.581569 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.582848 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:59.787393 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:01.789117 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.529695 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:01.029818 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.607570 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .Start
	I0127 15:37:58.607751 1076050 main.go:141] libmachine: (old-k8s-version-405706) starting domain...
	I0127 15:37:58.607775 1076050 main.go:141] libmachine: (old-k8s-version-405706) ensuring networks are active...
	I0127 15:37:58.608545 1076050 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network default is active
	I0127 15:37:58.608940 1076050 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network mk-old-k8s-version-405706 is active
	I0127 15:37:58.609360 1076050 main.go:141] libmachine: (old-k8s-version-405706) getting domain XML...
	I0127 15:37:58.610094 1076050 main.go:141] libmachine: (old-k8s-version-405706) creating domain...
	I0127 15:37:59.916140 1076050 main.go:141] libmachine: (old-k8s-version-405706) waiting for IP...
	I0127 15:37:59.917074 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:37:59.917644 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:37:59.917771 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:37:59.917639 1076085 retry.go:31] will retry after 260.191068ms: waiting for domain to come up
	I0127 15:38:00.180221 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.180922 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.180948 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.180879 1076085 retry.go:31] will retry after 359.566395ms: waiting for domain to come up
	I0127 15:38:00.542429 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.543056 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.543097 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.542942 1076085 retry.go:31] will retry after 454.555688ms: waiting for domain to come up
	I0127 15:38:00.999387 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.999926 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.999963 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.999888 1076085 retry.go:31] will retry after 559.246215ms: waiting for domain to come up
	I0127 15:38:01.560836 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:01.561528 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:01.561554 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:01.561489 1076085 retry.go:31] will retry after 552.626147ms: waiting for domain to come up
	I0127 15:38:02.116418 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:02.116873 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:02.116914 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:02.116852 1076085 retry.go:31] will retry after 808.293412ms: waiting for domain to come up
	I0127 15:38:02.927177 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:02.927742 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:02.927794 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:02.927707 1076085 retry.go:31] will retry after 740.958034ms: waiting for domain to come up
	I0127 15:38:00.583568 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.081418 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:04.290371 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:06.787711 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.529199 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:05.530455 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.670221 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:03.670746 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:03.670778 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:03.670698 1076085 retry.go:31] will retry after 1.365040284s: waiting for domain to come up
	I0127 15:38:05.038371 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:05.039049 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:05.039084 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:05.039001 1076085 retry.go:31] will retry after 1.410803026s: waiting for domain to come up
	I0127 15:38:06.451661 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:06.452329 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:06.452353 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:06.452303 1076085 retry.go:31] will retry after 1.899894945s: waiting for domain to come up
	I0127 15:38:08.354209 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:08.354816 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:08.354843 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:08.354774 1076085 retry.go:31] will retry after 2.020609979s: waiting for domain to come up
	I0127 15:38:05.581452 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:07.587869 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:08.788730 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:11.289383 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:07.534482 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:10.029370 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:10.377713 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:10.378246 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:10.378288 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:10.378203 1076085 retry.go:31] will retry after 2.469378968s: waiting for domain to come up
	I0127 15:38:12.850116 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:12.850624 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:12.850678 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:12.850598 1076085 retry.go:31] will retry after 4.322374162s: waiting for domain to come up
	I0127 15:38:10.085186 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:12.580963 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:13.788914 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:16.287163 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:12.528917 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:14.531412 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:17.028589 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:17.175528 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.176129 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has current primary IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.176161 1076050 main.go:141] libmachine: (old-k8s-version-405706) found domain IP: 192.168.72.49
	I0127 15:38:17.176174 1076050 main.go:141] libmachine: (old-k8s-version-405706) reserving static IP address...
	I0127 15:38:17.176643 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.176678 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | skip adding static IP to network mk-old-k8s-version-405706 - found existing host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"}
	I0127 15:38:17.176696 1076050 main.go:141] libmachine: (old-k8s-version-405706) reserved static IP address 192.168.72.49 for domain old-k8s-version-405706
	I0127 15:38:17.176711 1076050 main.go:141] libmachine: (old-k8s-version-405706) waiting for SSH...
	I0127 15:38:17.176725 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Getting to WaitForSSH function...
	I0127 15:38:17.179302 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.179688 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.179730 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.179875 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH client type: external
	I0127 15:38:17.179902 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa (-rw-------)
	I0127 15:38:17.179949 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:38:17.179964 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | About to run SSH command:
	I0127 15:38:17.179977 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | exit 0
	I0127 15:38:17.309257 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | SSH cmd err, output: <nil>: 
	I0127 15:38:17.309663 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetConfigRaw
	I0127 15:38:17.310369 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:17.313129 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.313573 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.313604 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.313898 1076050 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:38:17.314149 1076050 machine.go:93] provisionDockerMachine start ...
	I0127 15:38:17.314178 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:17.314424 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.317176 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.317563 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.317591 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.317822 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.318108 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.318299 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.318460 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.318635 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.318853 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.318864 1076050 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:38:17.433866 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 15:38:17.433903 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.434143 1076050 buildroot.go:166] provisioning hostname "old-k8s-version-405706"
	I0127 15:38:17.434203 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.434415 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.437023 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.437426 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.437473 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.437592 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.437754 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.437908 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.438061 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.438217 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.438406 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.438418 1076050 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-405706 && echo "old-k8s-version-405706" | sudo tee /etc/hostname
	I0127 15:38:17.569398 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-405706
	
	I0127 15:38:17.569429 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.572466 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.572839 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.572882 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.573066 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.573312 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.573557 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.573726 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.573924 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.574106 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.574123 1076050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-405706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-405706/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-405706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:38:17.705253 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:38:17.705300 1076050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:38:17.705320 1076050 buildroot.go:174] setting up certificates
	I0127 15:38:17.705333 1076050 provision.go:84] configureAuth start
	I0127 15:38:17.705346 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.705683 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:17.708834 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.709332 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.709361 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.709583 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.712195 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.712714 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.712755 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.712924 1076050 provision.go:143] copyHostCerts
	I0127 15:38:17.712990 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:38:17.713017 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:38:17.713095 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:38:17.713241 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:38:17.713259 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:38:17.713326 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:38:17.713446 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:38:17.713460 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:38:17.713500 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:38:17.713572 1076050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-405706 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-405706]
	I0127 15:38:17.976673 1076050 provision.go:177] copyRemoteCerts
	I0127 15:38:17.976750 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:38:17.976777 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.979513 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.979876 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.979909 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.980065 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.980267 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.980415 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.980554 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.068921 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:38:18.098428 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 15:38:18.126079 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 15:38:18.152193 1076050 provision.go:87] duration metric: took 446.842204ms to configureAuth
	I0127 15:38:18.152233 1076050 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:38:18.152508 1076050 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:38:18.152613 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.155796 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.156222 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.156254 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.156368 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.156577 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.156774 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.156938 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.157163 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:18.157375 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:18.157392 1076050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:38:18.414989 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:38:18.415023 1076050 machine.go:96] duration metric: took 1.100855468s to provisionDockerMachine
	I0127 15:38:18.415039 1076050 start.go:293] postStartSetup for "old-k8s-version-405706" (driver="kvm2")
	I0127 15:38:18.415054 1076050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:38:18.415078 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.415462 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:38:18.415499 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.418353 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.418778 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.418818 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.418925 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.419129 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.419322 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.419440 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:14.581198 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:16.581669 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:18.508389 1076050 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:38:18.513026 1076050 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:38:18.513065 1076050 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:38:18.513137 1076050 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:38:18.513210 1076050 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:38:18.513309 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:38:18.523553 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:38:18.550472 1076050 start.go:296] duration metric: took 135.415525ms for postStartSetup
	I0127 15:38:18.550553 1076050 fix.go:56] duration metric: took 19.966860382s for fixHost
	I0127 15:38:18.550584 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.553490 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.553896 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.553956 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.554089 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.554297 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.554458 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.554585 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.554806 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:18.555042 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:18.555058 1076050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:38:18.670326 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737992298.641469796
	
	I0127 15:38:18.670351 1076050 fix.go:216] guest clock: 1737992298.641469796
	I0127 15:38:18.670358 1076050 fix.go:229] Guest: 2025-01-27 15:38:18.641469796 +0000 UTC Remote: 2025-01-27 15:38:18.550560739 +0000 UTC m=+20.130793423 (delta=90.909057ms)
	I0127 15:38:18.670379 1076050 fix.go:200] guest clock delta is within tolerance: 90.909057ms
	I0127 15:38:18.670384 1076050 start.go:83] releasing machines lock for "old-k8s-version-405706", held for 20.08674208s
	I0127 15:38:18.670400 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.670689 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:18.673557 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.673931 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.673967 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.674112 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674583 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674751 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674869 1076050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:38:18.674916 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.674944 1076050 ssh_runner.go:195] Run: cat /version.json
	I0127 15:38:18.674975 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.677875 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678255 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678395 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.678427 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678595 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.678749 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.678783 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678819 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.679001 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.679093 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.679181 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.679243 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.681217 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.681729 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.787808 1076050 ssh_runner.go:195] Run: systemctl --version
	I0127 15:38:18.794834 1076050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:38:18.943494 1076050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:38:18.950152 1076050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:38:18.950269 1076050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:38:18.967110 1076050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:38:18.967141 1076050 start.go:495] detecting cgroup driver to use...
	I0127 15:38:18.967215 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:38:18.985631 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:38:19.002007 1076050 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:38:19.002098 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:38:19.015975 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:38:19.030630 1076050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:38:19.167900 1076050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:38:19.339595 1076050 docker.go:233] disabling docker service ...
	I0127 15:38:19.339680 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:38:19.355894 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:38:19.370010 1076050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:38:19.503289 1076050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:38:19.640006 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:38:19.656134 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:38:19.676136 1076050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 15:38:19.676207 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.688127 1076050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:38:19.688235 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.700866 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.712387 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.724833 1076050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:38:19.736825 1076050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:38:19.747906 1076050 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:38:19.747976 1076050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:38:19.761744 1076050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 15:38:19.771558 1076050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:38:19.891616 1076050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:38:19.987396 1076050 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:38:19.987496 1076050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:38:19.993148 1076050 start.go:563] Will wait 60s for crictl version
	I0127 15:38:19.993218 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:19.997232 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:38:20.047289 1076050 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:38:20.047381 1076050 ssh_runner.go:195] Run: crio --version
	I0127 15:38:20.080844 1076050 ssh_runner.go:195] Run: crio --version
	I0127 15:38:20.113498 1076050 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 15:38:18.287782 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:20.288830 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:19.029508 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:21.031738 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:20.115011 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:20.118087 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:20.118526 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:20.118554 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:20.118911 1076050 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 15:38:20.123918 1076050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:38:20.137420 1076050 kubeadm.go:883] updating cluster {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:38:20.137608 1076050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:38:20.137679 1076050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:38:20.203088 1076050 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:38:20.203162 1076050 ssh_runner.go:195] Run: which lz4
	I0127 15:38:20.207834 1076050 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:38:20.212511 1076050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:38:20.212550 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 15:38:21.944361 1076050 crio.go:462] duration metric: took 1.736570115s to copy over tarball
	I0127 15:38:21.944459 1076050 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 15:38:19.082119 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:21.583597 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:22.786853 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:24.787379 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:26.788848 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:23.529051 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:25.530450 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:25.017812 1076050 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.073312095s)
	I0127 15:38:25.017848 1076050 crio.go:469] duration metric: took 3.07344607s to extract the tarball
	I0127 15:38:25.017859 1076050 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 15:38:25.068609 1076050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:38:25.107660 1076050 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:38:25.107705 1076050 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 15:38:25.107797 1076050 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.107831 1076050 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.107843 1076050 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 15:38:25.107782 1076050 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.107866 1076050 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.107793 1076050 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.107810 1076050 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.107872 1076050 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.109711 1076050 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.109716 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.109736 1076050 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.109749 1076050 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 15:38:25.109765 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.109711 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.109717 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.109721 1076050 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.319866 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 15:38:25.320854 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.329418 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.331454 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.331999 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.338125 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.346119 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.438398 1076050 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 15:38:25.438508 1076050 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 15:38:25.438596 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.485875 1076050 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 15:38:25.485939 1076050 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.486002 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.524177 1076050 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 15:38:25.524230 1076050 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.524284 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.533972 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.537150 1076050 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 15:38:25.537198 1076050 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.537239 1076050 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 15:38:25.537282 1076050 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.537306 1076050 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 15:38:25.537329 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537256 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537388 1076050 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 15:38:25.537334 1076050 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.537413 1076050 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.537430 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537437 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.537438 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537484 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.537505 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.730245 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.730334 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.730438 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.730438 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.730510 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.730615 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.730667 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.896539 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.896835 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.896864 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.896869 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.896952 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.896990 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.897080 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:26.067159 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 15:38:26.067203 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:26.067293 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:26.078064 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:26.078128 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 15:38:26.078233 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:26.078345 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 15:38:26.172870 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 15:38:26.172975 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 15:38:26.177848 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 15:38:26.177943 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 15:38:26.177981 1076050 cache_images.go:92] duration metric: took 1.070258879s to LoadCachedImages
	W0127 15:38:26.178068 1076050 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0127 15:38:26.178082 1076050 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0127 15:38:26.178211 1076050 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-405706 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:38:26.178294 1076050 ssh_runner.go:195] Run: crio config
	I0127 15:38:26.228357 1076050 cni.go:84] Creating CNI manager for ""
	I0127 15:38:26.228379 1076050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:38:26.228388 1076050 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:38:26.228409 1076050 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-405706 NodeName:old-k8s-version-405706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 15:38:26.228568 1076050 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-405706"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:38:26.228657 1076050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 15:38:26.240731 1076050 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:38:26.240809 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:38:26.251662 1076050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 15:38:26.270153 1076050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:38:26.292045 1076050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 15:38:26.312171 1076050 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0127 15:38:26.316436 1076050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:38:26.330437 1076050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:38:26.453879 1076050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:38:26.473364 1076050 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706 for IP: 192.168.72.49
	I0127 15:38:26.473395 1076050 certs.go:194] generating shared ca certs ...
	I0127 15:38:26.473419 1076050 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:38:26.473672 1076050 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:38:26.473739 1076050 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:38:26.473755 1076050 certs.go:256] generating profile certs ...
	I0127 15:38:26.473909 1076050 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.key
	I0127 15:38:26.473993 1076050 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key.8816e362
	I0127 15:38:26.474047 1076050 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key
	I0127 15:38:26.474215 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:38:26.474262 1076050 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:38:26.474272 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:38:26.474304 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:38:26.474335 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:38:26.474377 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:38:26.474434 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:38:26.475310 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:38:26.528151 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:38:26.569116 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:38:26.612791 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:38:26.643362 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 15:38:26.682611 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:38:26.736411 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:38:26.766171 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 15:38:26.806820 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:38:26.835935 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:38:26.862752 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:38:26.890713 1076050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:38:26.910713 1076050 ssh_runner.go:195] Run: openssl version
	I0127 15:38:26.917762 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:38:26.930093 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.935103 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.935187 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.941655 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:38:26.955281 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:38:26.969095 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.974104 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.974177 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.980428 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:38:26.992636 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:38:27.006632 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.011797 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.011873 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.018384 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 15:38:27.032120 1076050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:38:27.037441 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:38:27.044020 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:38:27.050856 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:38:27.057896 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:38:27.065183 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:38:27.072632 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 15:38:27.079504 1076050 kubeadm.go:392] StartCluster: {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:38:27.079605 1076050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:38:27.079670 1076050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:38:27.122961 1076050 cri.go:89] found id: ""
	I0127 15:38:27.123034 1076050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:38:27.134170 1076050 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 15:38:27.134194 1076050 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 15:38:27.134254 1076050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 15:38:27.146526 1076050 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:38:27.147269 1076050 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-405706" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:38:27.147608 1076050 kubeconfig.go:62] /home/jenkins/minikube-integration/20321-1005652/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-405706" cluster setting kubeconfig missing "old-k8s-version-405706" context setting]
	I0127 15:38:27.148175 1076050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:38:27.218301 1076050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 15:38:27.230797 1076050 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0127 15:38:27.230842 1076050 kubeadm.go:1160] stopping kube-system containers ...
	I0127 15:38:27.230858 1076050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 15:38:27.230918 1076050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:38:27.273845 1076050 cri.go:89] found id: ""
	I0127 15:38:27.273935 1076050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 15:38:27.295864 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:38:27.308596 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:38:27.308616 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:38:27.308663 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:38:27.319955 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:38:27.320015 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:38:27.331528 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:38:27.342177 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:38:27.342248 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:38:27.352666 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:38:27.364010 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:38:27.364077 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:38:27.375886 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:38:27.386069 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:38:27.386141 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:38:27.398977 1076050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:38:27.410085 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:27.579462 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.350228 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:24.081574 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:26.084881 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.581361 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:29.287085 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:31.288269 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.030083 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:30.030174 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.604472 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.715137 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.812566 1076050 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:38:28.812663 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:29.312952 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:29.812784 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:30.313395 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:30.813525 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.313773 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.813137 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:32.313501 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:32.813028 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:33.312894 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.080211 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.582580 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.788390 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:36.287173 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:32.529206 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:35.028518 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:37.031307 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.813345 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:34.313510 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:34.813678 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:35.313121 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:35.813541 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.312890 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.813411 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:37.313228 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:37.813599 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:38.313526 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.081107 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.582581 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.287892 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:40.787491 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:39.529329 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:42.028378 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.812744 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:39.313501 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:39.813568 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:40.313585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:40.813078 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.312734 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.812823 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:42.312829 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:42.813108 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:43.312983 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.080457 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:43.082314 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:42.787697 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:45.287260 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:47.287367 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:44.028619 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:46.029083 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:43.813614 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:44.313522 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:44.813162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.313000 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.813166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:46.313147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:46.812791 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:47.312810 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:47.812775 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:48.313432 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.581743 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:47.582153 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:49.287859 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:51.288012 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:48.029471 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:50.529718 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:48.813154 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:49.312838 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:49.813340 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.312925 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.813287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:51.312785 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:51.813687 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:52.313111 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:52.812802 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:53.313097 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.081002 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:52.581311 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.288532 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:55.788221 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.028591 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:55.529910 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.813587 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.313181 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.812993 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:55.313464 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:55.813050 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:56.312920 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:56.813705 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:57.313622 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:57.812842 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:58.313381 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.581795 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:57.080722 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.288309 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:00.786850 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.028613 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:00.529908 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.812816 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.312817 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.813035 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:00.313444 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:00.813287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:01.312763 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:01.813721 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:02.313131 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:02.813297 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:03.313697 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.581769 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:02.080943 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:02.787929 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:05.287833 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:07.287889 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:03.029275 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:05.029418 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:07.030052 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:03.813314 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.313147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.813585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:05.313388 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:05.813722 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:06.313190 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:06.812942 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:07.313516 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:07.813321 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:08.313684 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.081681 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:06.582635 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.289282 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.788208 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.528140 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.529355 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:08.813457 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.312972 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.812986 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:10.313838 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:10.813128 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:11.312866 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:11.812982 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:12.312768 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:12.813426 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:13.313370 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.080839 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.581560 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:14.287327 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.288546 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:13.529804 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.028749 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:13.812803 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.313174 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.813162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:15.312724 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:15.813166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:16.313662 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:16.813497 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:17.313422 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:17.813587 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:18.313749 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.080371 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.582575 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.584549 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.787976 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:20.788184 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.029709 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:20.529523 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.813301 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:19.313610 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:19.813293 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:20.313667 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:20.813161 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.313709 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.813699 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:22.313185 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:22.813328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:23.313612 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.080013 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.080298 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.287582 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.787381 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.029776 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.529747 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.812846 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:24.313129 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:24.813728 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.313735 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.813439 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:26.313406 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:26.813597 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:27.313484 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:27.813672 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:28.313161 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.081823 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.581035 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.787632 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.287493 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.289889 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.530494 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.028046 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.030227 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:28.813541 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:28.813633 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:28.855334 1076050 cri.go:89] found id: ""
	I0127 15:39:28.855368 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.855376 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:28.855383 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:28.855466 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:28.892923 1076050 cri.go:89] found id: ""
	I0127 15:39:28.892959 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.892972 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:28.892980 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:28.893081 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:28.942133 1076050 cri.go:89] found id: ""
	I0127 15:39:28.942163 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.942187 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:28.942196 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:28.942261 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:28.980950 1076050 cri.go:89] found id: ""
	I0127 15:39:28.980978 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.980988 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:28.980995 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:28.981080 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:29.022166 1076050 cri.go:89] found id: ""
	I0127 15:39:29.022200 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.022209 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:29.022215 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:29.022269 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:29.060408 1076050 cri.go:89] found id: ""
	I0127 15:39:29.060439 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.060447 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:29.060454 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:29.060521 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:29.100890 1076050 cri.go:89] found id: ""
	I0127 15:39:29.100924 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.100935 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:29.100944 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:29.101075 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:29.139688 1076050 cri.go:89] found id: ""
	I0127 15:39:29.139720 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.139729 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:29.139741 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:29.139752 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:29.181255 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:29.181288 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:29.232218 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:29.232260 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:29.245853 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:29.245881 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:29.382461 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:29.382487 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:29.382501 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:31.957162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:31.971225 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:31.971290 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:32.026501 1076050 cri.go:89] found id: ""
	I0127 15:39:32.026535 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.026546 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:32.026555 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:32.026624 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:32.066192 1076050 cri.go:89] found id: ""
	I0127 15:39:32.066232 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.066244 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:32.066253 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:32.066334 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:32.106017 1076050 cri.go:89] found id: ""
	I0127 15:39:32.106047 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.106056 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:32.106062 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:32.106130 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:32.146534 1076050 cri.go:89] found id: ""
	I0127 15:39:32.146565 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.146575 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:32.146581 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:32.146644 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:32.186982 1076050 cri.go:89] found id: ""
	I0127 15:39:32.187007 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.187016 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:32.187022 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:32.187077 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:32.229657 1076050 cri.go:89] found id: ""
	I0127 15:39:32.229685 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.229693 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:32.229700 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:32.229756 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:32.267228 1076050 cri.go:89] found id: ""
	I0127 15:39:32.267259 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.267268 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:32.267275 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:32.267340 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:32.305366 1076050 cri.go:89] found id: ""
	I0127 15:39:32.305394 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.305402 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:32.305412 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:32.305424 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:32.345293 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:32.345335 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:32.395863 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:32.395922 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:32.411092 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:32.411133 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:32.493214 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:32.493248 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:32.493266 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:30.082518 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.580263 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.787461 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.287358 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.530278 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.028574 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:35.077133 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:35.094000 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:35.094095 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:35.132448 1076050 cri.go:89] found id: ""
	I0127 15:39:35.132488 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.132500 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:35.132508 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:35.132583 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:35.167599 1076050 cri.go:89] found id: ""
	I0127 15:39:35.167632 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.167644 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:35.167653 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:35.167713 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:35.204383 1076050 cri.go:89] found id: ""
	I0127 15:39:35.204429 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.204438 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:35.204444 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:35.204503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:35.241382 1076050 cri.go:89] found id: ""
	I0127 15:39:35.241411 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.241423 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:35.241431 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:35.241500 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:35.278253 1076050 cri.go:89] found id: ""
	I0127 15:39:35.278280 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.278289 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:35.278296 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:35.278357 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:35.320389 1076050 cri.go:89] found id: ""
	I0127 15:39:35.320418 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.320425 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:35.320432 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:35.320498 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:35.360563 1076050 cri.go:89] found id: ""
	I0127 15:39:35.360592 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.360604 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:35.360613 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:35.360670 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:35.396537 1076050 cri.go:89] found id: ""
	I0127 15:39:35.396580 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.396593 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:35.396609 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:35.396628 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:35.474518 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:35.474554 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:35.474575 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:35.554396 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:35.554445 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:35.599042 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:35.599100 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:35.652578 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:35.652619 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:38.167582 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:38.182164 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:38.182250 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:38.218993 1076050 cri.go:89] found id: ""
	I0127 15:39:38.219025 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.219034 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:38.219040 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:38.219121 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:38.257547 1076050 cri.go:89] found id: ""
	I0127 15:39:38.257575 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.257584 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:38.257590 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:38.257643 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:38.295251 1076050 cri.go:89] found id: ""
	I0127 15:39:38.295287 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.295299 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:38.295307 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:38.295378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:38.339567 1076050 cri.go:89] found id: ""
	I0127 15:39:38.339605 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.339621 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:38.339629 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:38.339697 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:38.375969 1076050 cri.go:89] found id: ""
	I0127 15:39:38.376007 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.376019 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:38.376028 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:38.376097 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:38.429385 1076050 cri.go:89] found id: ""
	I0127 15:39:38.429416 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.429427 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:38.429435 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:38.429503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:34.587256 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.080093 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.287413 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.287958 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.028638 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.029306 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:38.481564 1076050 cri.go:89] found id: ""
	I0127 15:39:38.481604 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.481618 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:38.481627 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:38.481700 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:38.535177 1076050 cri.go:89] found id: ""
	I0127 15:39:38.535203 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.535211 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:38.535223 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:38.535238 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:38.549306 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:38.549349 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:38.622573 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:38.622607 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:38.622625 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:38.697323 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:38.697363 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:38.738950 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:38.738981 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:41.298384 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:41.312088 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:41.312162 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:41.349779 1076050 cri.go:89] found id: ""
	I0127 15:39:41.349808 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.349817 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:41.349824 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:41.349887 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:41.387675 1076050 cri.go:89] found id: ""
	I0127 15:39:41.387715 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.387732 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:41.387740 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:41.387797 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:41.424135 1076050 cri.go:89] found id: ""
	I0127 15:39:41.424166 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.424175 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:41.424181 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:41.424246 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:41.464733 1076050 cri.go:89] found id: ""
	I0127 15:39:41.464760 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.464768 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:41.464774 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:41.464835 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:41.506669 1076050 cri.go:89] found id: ""
	I0127 15:39:41.506700 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.506713 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:41.506725 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:41.506793 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:41.548804 1076050 cri.go:89] found id: ""
	I0127 15:39:41.548833 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.548842 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:41.548848 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:41.548911 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:41.590203 1076050 cri.go:89] found id: ""
	I0127 15:39:41.590233 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.590245 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:41.590253 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:41.590318 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:41.625407 1076050 cri.go:89] found id: ""
	I0127 15:39:41.625434 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.625442 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:41.625452 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:41.625466 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:41.702765 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:41.702808 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:41.745622 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:41.745662 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:41.799894 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:41.799943 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:41.814151 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:41.814180 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:41.899042 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:39.580910 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.581608 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.587620 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.787400 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:45.787456 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.529161 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:46.028736 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:44.399328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:44.420663 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:44.420731 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:44.484562 1076050 cri.go:89] found id: ""
	I0127 15:39:44.484595 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.484606 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:44.484616 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:44.484681 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:44.555635 1076050 cri.go:89] found id: ""
	I0127 15:39:44.555663 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.555672 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:44.555678 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:44.555730 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:44.598564 1076050 cri.go:89] found id: ""
	I0127 15:39:44.598592 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.598600 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:44.598606 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:44.598663 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:44.639072 1076050 cri.go:89] found id: ""
	I0127 15:39:44.639115 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.639126 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:44.639134 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:44.639200 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:44.677620 1076050 cri.go:89] found id: ""
	I0127 15:39:44.677652 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.677662 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:44.677670 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:44.677730 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:44.714227 1076050 cri.go:89] found id: ""
	I0127 15:39:44.714263 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.714273 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:44.714281 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:44.714357 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:44.753864 1076050 cri.go:89] found id: ""
	I0127 15:39:44.753898 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.753911 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:44.753919 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:44.753987 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:44.790576 1076050 cri.go:89] found id: ""
	I0127 15:39:44.790603 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.790613 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:44.790625 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:44.790641 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:44.864427 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:44.864468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:44.904955 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:44.904989 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:44.959074 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:44.959137 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:44.976053 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:44.976082 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:45.062578 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:47.562901 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:47.576665 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:47.576751 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:47.615806 1076050 cri.go:89] found id: ""
	I0127 15:39:47.615842 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.615855 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:47.615864 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:47.615936 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:47.651913 1076050 cri.go:89] found id: ""
	I0127 15:39:47.651947 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.651966 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:47.651974 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:47.652045 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:47.688572 1076050 cri.go:89] found id: ""
	I0127 15:39:47.688604 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.688614 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:47.688620 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:47.688680 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:47.726688 1076050 cri.go:89] found id: ""
	I0127 15:39:47.726725 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.726737 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:47.726745 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:47.726815 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:47.768385 1076050 cri.go:89] found id: ""
	I0127 15:39:47.768413 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.768424 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:47.768433 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:47.768493 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:47.806575 1076050 cri.go:89] found id: ""
	I0127 15:39:47.806601 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.806609 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:47.806615 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:47.806668 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:47.843234 1076050 cri.go:89] found id: ""
	I0127 15:39:47.843259 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.843267 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:47.843273 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:47.843325 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:47.882360 1076050 cri.go:89] found id: ""
	I0127 15:39:47.882398 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.882411 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:47.882426 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:47.882445 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:47.936678 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:47.936721 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:47.951861 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:47.951889 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:48.027451 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:48.027479 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:48.027497 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:48.110314 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:48.110362 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:46.079379 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:48.081369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:47.788330 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.288398 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:52.281192 1074659 pod_ready.go:82] duration metric: took 4m0.000550048s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" ...
	E0127 15:39:52.281240 1074659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:39:52.281264 1074659 pod_ready.go:39] duration metric: took 4m13.057238138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:39:52.281309 1074659 kubeadm.go:597] duration metric: took 4m21.316884653s to restartPrimaryControlPlane
	W0127 15:39:52.281435 1074659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:39:52.281477 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:39:48.029038 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.529674 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.653993 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:50.668077 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:50.668150 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:50.708132 1076050 cri.go:89] found id: ""
	I0127 15:39:50.708160 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.708168 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:50.708175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:50.708244 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:50.748371 1076050 cri.go:89] found id: ""
	I0127 15:39:50.748400 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.748409 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:50.748415 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:50.748471 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:50.785148 1076050 cri.go:89] found id: ""
	I0127 15:39:50.785183 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.785194 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:50.785202 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:50.785267 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:50.820827 1076050 cri.go:89] found id: ""
	I0127 15:39:50.820864 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.820874 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:50.820881 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:50.820948 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:50.859566 1076050 cri.go:89] found id: ""
	I0127 15:39:50.859602 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.859615 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:50.859623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:50.859699 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:50.896227 1076050 cri.go:89] found id: ""
	I0127 15:39:50.896263 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.896276 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:50.896285 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:50.896352 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:50.933357 1076050 cri.go:89] found id: ""
	I0127 15:39:50.933393 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.933405 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:50.933414 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:50.933478 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:50.968264 1076050 cri.go:89] found id: ""
	I0127 15:39:50.968303 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.968313 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:50.968324 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:50.968338 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:51.026708 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:51.026754 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:51.041436 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:51.041475 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:51.110945 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:51.110967 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:51.110980 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:51.192815 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:51.192858 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:50.581346 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:53.080934 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:52.529918 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:55.028235 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:57.029052 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:53.737031 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:53.751175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:53.751266 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:53.793720 1076050 cri.go:89] found id: ""
	I0127 15:39:53.793748 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.793757 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:53.793764 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:53.793822 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:53.832993 1076050 cri.go:89] found id: ""
	I0127 15:39:53.833065 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.833074 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:53.833080 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:53.833139 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:53.872089 1076050 cri.go:89] found id: ""
	I0127 15:39:53.872122 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.872133 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:53.872147 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:53.872205 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:53.914262 1076050 cri.go:89] found id: ""
	I0127 15:39:53.914298 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.914311 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:53.914321 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:53.914400 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:53.954035 1076050 cri.go:89] found id: ""
	I0127 15:39:53.954073 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.954085 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:53.954093 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:53.954158 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:53.994248 1076050 cri.go:89] found id: ""
	I0127 15:39:53.994306 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.994320 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:53.994329 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:53.994407 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:54.031811 1076050 cri.go:89] found id: ""
	I0127 15:39:54.031836 1076050 logs.go:282] 0 containers: []
	W0127 15:39:54.031847 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:54.031855 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:54.031917 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:54.070159 1076050 cri.go:89] found id: ""
	I0127 15:39:54.070199 1076050 logs.go:282] 0 containers: []
	W0127 15:39:54.070212 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:54.070225 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:54.070242 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:54.112540 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:54.112575 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:54.163657 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:54.163710 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:54.178720 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:54.178757 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:54.255558 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:54.255596 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:54.255613 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:56.834676 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:56.848186 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:56.848265 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:56.885958 1076050 cri.go:89] found id: ""
	I0127 15:39:56.885984 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.885993 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:56.885999 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:56.886050 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:56.925195 1076050 cri.go:89] found id: ""
	I0127 15:39:56.925233 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.925247 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:56.925256 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:56.925328 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:56.967597 1076050 cri.go:89] found id: ""
	I0127 15:39:56.967631 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.967644 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:56.967654 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:56.967719 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:57.005973 1076050 cri.go:89] found id: ""
	I0127 15:39:57.006008 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.006021 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:57.006029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:57.006104 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:57.042547 1076050 cri.go:89] found id: ""
	I0127 15:39:57.042581 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.042593 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:57.042601 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:57.042664 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:57.084492 1076050 cri.go:89] found id: ""
	I0127 15:39:57.084517 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.084525 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:57.084531 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:57.084581 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:57.120954 1076050 cri.go:89] found id: ""
	I0127 15:39:57.120988 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.121032 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:57.121039 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:57.121100 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:57.159620 1076050 cri.go:89] found id: ""
	I0127 15:39:57.159657 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.159668 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:57.159681 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:57.159696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:57.203209 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:57.203245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:57.253929 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:57.253972 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:57.268430 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:57.268463 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:57.338716 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:57.338741 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:57.338760 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:55.082397 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:57.581203 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:59.528435 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:01.530232 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:59.918299 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:59.933577 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:59.933650 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:59.971396 1076050 cri.go:89] found id: ""
	I0127 15:39:59.971437 1076050 logs.go:282] 0 containers: []
	W0127 15:39:59.971449 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:59.971457 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:59.971516 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:00.012852 1076050 cri.go:89] found id: ""
	I0127 15:40:00.012890 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.012902 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:00.012910 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:00.012983 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:00.053636 1076050 cri.go:89] found id: ""
	I0127 15:40:00.053673 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.053685 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:00.053693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:00.053757 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:00.091584 1076050 cri.go:89] found id: ""
	I0127 15:40:00.091615 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.091626 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:00.091634 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:00.091698 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:00.126906 1076050 cri.go:89] found id: ""
	I0127 15:40:00.126936 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.126945 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:00.126957 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:00.127012 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:00.164308 1076050 cri.go:89] found id: ""
	I0127 15:40:00.164345 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.164354 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:00.164360 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:00.164412 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:00.200695 1076050 cri.go:89] found id: ""
	I0127 15:40:00.200727 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.200739 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:00.200750 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:00.200807 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:00.239910 1076050 cri.go:89] found id: ""
	I0127 15:40:00.239938 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.239947 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:00.239958 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:00.239970 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:00.255441 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:00.255468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:00.333737 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:00.333767 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:00.333782 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:00.417199 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:00.417256 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:00.461683 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:00.461711 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:03.016318 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:03.033626 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:03.033707 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:03.070895 1076050 cri.go:89] found id: ""
	I0127 15:40:03.070929 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.070940 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:03.070948 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:03.071011 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:03.107691 1076050 cri.go:89] found id: ""
	I0127 15:40:03.107725 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.107736 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:03.107742 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:03.107806 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:03.144989 1076050 cri.go:89] found id: ""
	I0127 15:40:03.145032 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.145044 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:03.145052 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:03.145106 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:03.182441 1076050 cri.go:89] found id: ""
	I0127 15:40:03.182473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.182482 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:03.182488 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:03.182540 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:03.220251 1076050 cri.go:89] found id: ""
	I0127 15:40:03.220286 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.220298 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:03.220306 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:03.220366 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:03.258761 1076050 cri.go:89] found id: ""
	I0127 15:40:03.258799 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.258810 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:03.258818 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:03.258888 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:03.307236 1076050 cri.go:89] found id: ""
	I0127 15:40:03.307274 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.307283 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:03.307289 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:03.307352 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:03.354451 1076050 cri.go:89] found id: ""
	I0127 15:40:03.354487 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.354498 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:03.354509 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:03.354524 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:03.405369 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:03.405412 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:03.420837 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:03.420866 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 15:40:00.081973 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:02.581659 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:04.030283 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:06.529988 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	W0127 15:40:03.496384 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:03.496420 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:03.496435 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:03.576992 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:03.577066 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:06.128185 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:06.142266 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:06.142381 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:06.181053 1076050 cri.go:89] found id: ""
	I0127 15:40:06.181087 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.181097 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:06.181106 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:06.181162 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:06.218206 1076050 cri.go:89] found id: ""
	I0127 15:40:06.218236 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.218245 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:06.218251 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:06.218304 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:06.255094 1076050 cri.go:89] found id: ""
	I0127 15:40:06.255138 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.255158 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:06.255165 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:06.255221 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:06.295564 1076050 cri.go:89] found id: ""
	I0127 15:40:06.295598 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.295611 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:06.295620 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:06.295683 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:06.332518 1076050 cri.go:89] found id: ""
	I0127 15:40:06.332552 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.332561 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:06.332568 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:06.332641 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:06.371503 1076050 cri.go:89] found id: ""
	I0127 15:40:06.371532 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.371540 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:06.371547 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:06.371599 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:06.409091 1076050 cri.go:89] found id: ""
	I0127 15:40:06.409119 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.409128 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:06.409135 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:06.409192 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:06.445033 1076050 cri.go:89] found id: ""
	I0127 15:40:06.445078 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.445092 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:06.445113 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:06.445132 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:06.460284 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:06.460321 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:06.543807 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:06.543831 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:06.543844 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:06.626884 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:06.626929 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:06.670309 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:06.670350 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:05.075392 1074908 pod_ready.go:82] duration metric: took 4m0.001148212s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" ...
	E0127 15:40:05.075435 1074908 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:40:05.075460 1074908 pod_ready.go:39] duration metric: took 4m14.604653981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:05.075504 1074908 kubeadm.go:597] duration metric: took 4m23.17285487s to restartPrimaryControlPlane
	W0127 15:40:05.075610 1074908 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:40:05.075649 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:09.029666 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:11.529388 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:09.219752 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:09.234460 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:09.234537 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:09.271526 1076050 cri.go:89] found id: ""
	I0127 15:40:09.271574 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.271584 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:09.271590 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:09.271661 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:09.312643 1076050 cri.go:89] found id: ""
	I0127 15:40:09.312681 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.312696 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:09.312705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:09.312771 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:09.351697 1076050 cri.go:89] found id: ""
	I0127 15:40:09.351736 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.351749 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:09.351757 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:09.351825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:09.390289 1076050 cri.go:89] found id: ""
	I0127 15:40:09.390315 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.390324 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:09.390332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:09.390400 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:09.431515 1076050 cri.go:89] found id: ""
	I0127 15:40:09.431548 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.431559 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:09.431567 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:09.431634 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:09.473134 1076050 cri.go:89] found id: ""
	I0127 15:40:09.473170 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.473182 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:09.473190 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:09.473261 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:09.516505 1076050 cri.go:89] found id: ""
	I0127 15:40:09.516542 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.516556 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:09.516564 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:09.516634 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:09.560596 1076050 cri.go:89] found id: ""
	I0127 15:40:09.560638 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.560649 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:09.560662 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:09.560678 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:09.616174 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:09.616219 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:09.631586 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:09.631622 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:09.706642 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:09.706677 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:09.706696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:09.780834 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:09.780883 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:12.323632 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:12.337043 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:12.337121 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:12.371851 1076050 cri.go:89] found id: ""
	I0127 15:40:12.371875 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.371884 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:12.371891 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:12.371963 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:12.409962 1076050 cri.go:89] found id: ""
	I0127 15:40:12.409997 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.410010 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:12.410018 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:12.410095 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:12.445440 1076050 cri.go:89] found id: ""
	I0127 15:40:12.445473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.445482 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:12.445489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:12.445544 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:12.481239 1076050 cri.go:89] found id: ""
	I0127 15:40:12.481270 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.481282 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:12.481303 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:12.481372 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:12.520832 1076050 cri.go:89] found id: ""
	I0127 15:40:12.520859 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.520867 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:12.520873 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:12.520923 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:12.559781 1076050 cri.go:89] found id: ""
	I0127 15:40:12.559818 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.559829 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:12.559838 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:12.559901 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:12.597821 1076050 cri.go:89] found id: ""
	I0127 15:40:12.597861 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.597873 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:12.597882 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:12.597944 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:12.635939 1076050 cri.go:89] found id: ""
	I0127 15:40:12.635974 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.635986 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:12.635998 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:12.636013 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:12.709126 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:12.709150 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:12.709163 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:12.792573 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:12.792617 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:12.832327 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:12.832368 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:12.884984 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:12.885039 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:14.028951 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:16.029783 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:15.401225 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:15.415906 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:15.415993 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:15.457989 1076050 cri.go:89] found id: ""
	I0127 15:40:15.458021 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.458031 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:15.458038 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:15.458100 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:15.493789 1076050 cri.go:89] found id: ""
	I0127 15:40:15.493836 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.493852 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:15.493860 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:15.493927 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:15.535193 1076050 cri.go:89] found id: ""
	I0127 15:40:15.535219 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.535227 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:15.535233 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:15.535298 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:15.574983 1076050 cri.go:89] found id: ""
	I0127 15:40:15.575016 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.575030 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:15.575036 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:15.575107 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:15.613038 1076050 cri.go:89] found id: ""
	I0127 15:40:15.613072 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.613083 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:15.613091 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:15.613166 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:15.651439 1076050 cri.go:89] found id: ""
	I0127 15:40:15.651473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.651483 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:15.651489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:15.651559 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:15.697895 1076050 cri.go:89] found id: ""
	I0127 15:40:15.697933 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.697945 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:15.697953 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:15.698026 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:15.736368 1076050 cri.go:89] found id: ""
	I0127 15:40:15.736397 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.736405 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:15.736416 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:15.736431 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:15.788954 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:15.789002 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:15.803162 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:15.803193 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:15.878504 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:15.878538 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:15.878557 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:15.955134 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:15.955186 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:20.131059 1074659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.849552205s)
	I0127 15:40:20.131159 1074659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:20.154965 1074659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:20.170718 1074659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:20.182783 1074659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:20.182813 1074659 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:20.182879 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:40:20.196772 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:20.196838 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:20.219107 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:40:20.231548 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:20.231633 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:20.243226 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:40:20.262500 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:20.262565 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:20.273568 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:40:20.283606 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:20.283675 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:20.294389 1074659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:20.475280 1074659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:40:18.529412 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:21.029561 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:18.497724 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:18.519382 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:18.519463 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:18.556458 1076050 cri.go:89] found id: ""
	I0127 15:40:18.556495 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.556504 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:18.556511 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:18.556566 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:18.593672 1076050 cri.go:89] found id: ""
	I0127 15:40:18.593700 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.593717 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:18.593726 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:18.593794 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:18.632353 1076050 cri.go:89] found id: ""
	I0127 15:40:18.632393 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.632404 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:18.632412 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:18.632467 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:18.668613 1076050 cri.go:89] found id: ""
	I0127 15:40:18.668647 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.668659 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:18.668668 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:18.668738 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:18.706751 1076050 cri.go:89] found id: ""
	I0127 15:40:18.706786 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.706798 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:18.706806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:18.706872 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:18.745670 1076050 cri.go:89] found id: ""
	I0127 15:40:18.745706 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.745719 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:18.745728 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:18.745798 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:18.783666 1076050 cri.go:89] found id: ""
	I0127 15:40:18.783696 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.783708 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:18.783716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:18.783783 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:18.821591 1076050 cri.go:89] found id: ""
	I0127 15:40:18.821626 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.821637 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:18.821652 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:18.821669 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:18.895554 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:18.895582 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:18.895600 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:18.977366 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:18.977416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:19.020341 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:19.020374 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:19.073493 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:19.073537 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:21.589182 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:21.607125 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:21.607245 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:21.654887 1076050 cri.go:89] found id: ""
	I0127 15:40:21.654922 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.654933 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:21.654942 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:21.655013 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:21.703233 1076050 cri.go:89] found id: ""
	I0127 15:40:21.703279 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.703289 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:21.703298 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:21.703440 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:21.744227 1076050 cri.go:89] found id: ""
	I0127 15:40:21.744260 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.744273 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:21.744286 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:21.744356 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:21.786397 1076050 cri.go:89] found id: ""
	I0127 15:40:21.786430 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.786445 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:21.786454 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:21.786517 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:21.831934 1076050 cri.go:89] found id: ""
	I0127 15:40:21.831963 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.831974 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:21.831980 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:21.832036 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:21.877230 1076050 cri.go:89] found id: ""
	I0127 15:40:21.877264 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.877275 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:21.877283 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:21.877351 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:21.923993 1076050 cri.go:89] found id: ""
	I0127 15:40:21.924026 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.924038 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:21.924047 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:21.924109 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:21.963890 1076050 cri.go:89] found id: ""
	I0127 15:40:21.963922 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.963931 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:21.963942 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:21.963958 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:22.010706 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:22.010743 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:22.070053 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:22.070096 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:22.085574 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:22.085604 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:22.163198 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:22.163228 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:22.163245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:23.031094 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:24.523077 1075160 pod_ready.go:82] duration metric: took 4m0.001138229s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" ...
	E0127 15:40:24.523130 1075160 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:40:24.523156 1075160 pod_ready.go:39] duration metric: took 4m14.040193884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:24.523186 1075160 kubeadm.go:597] duration metric: took 4m21.511137654s to restartPrimaryControlPlane
	W0127 15:40:24.523251 1075160 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:40:24.523280 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:24.747046 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:24.761103 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:24.761194 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:24.806570 1076050 cri.go:89] found id: ""
	I0127 15:40:24.806659 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.806679 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:24.806689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:24.806755 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:24.854651 1076050 cri.go:89] found id: ""
	I0127 15:40:24.854684 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.854697 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:24.854705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:24.854773 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:24.915668 1076050 cri.go:89] found id: ""
	I0127 15:40:24.915705 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.915718 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:24.915728 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:24.915794 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:24.975570 1076050 cri.go:89] found id: ""
	I0127 15:40:24.975610 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.975623 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:24.975632 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:24.975704 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:25.025853 1076050 cri.go:89] found id: ""
	I0127 15:40:25.025885 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.025896 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:25.025903 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:25.025980 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:25.064940 1076050 cri.go:89] found id: ""
	I0127 15:40:25.064976 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.064987 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:25.064996 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:25.065082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:25.110507 1076050 cri.go:89] found id: ""
	I0127 15:40:25.110539 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.110549 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:25.110558 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:25.110622 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:25.150241 1076050 cri.go:89] found id: ""
	I0127 15:40:25.150288 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.150299 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:25.150313 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:25.150330 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:25.243205 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:25.243238 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:25.243255 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:25.323856 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:25.323900 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:25.367207 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:25.367245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:25.429072 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:25.429120 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:27.945904 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:27.959618 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:27.959708 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:27.999655 1076050 cri.go:89] found id: ""
	I0127 15:40:27.999685 1076050 logs.go:282] 0 containers: []
	W0127 15:40:27.999697 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:27.999705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:27.999768 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:28.039662 1076050 cri.go:89] found id: ""
	I0127 15:40:28.039695 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.039708 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:28.039716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:28.039786 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:28.075418 1076050 cri.go:89] found id: ""
	I0127 15:40:28.075451 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.075462 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:28.075472 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:28.075542 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:28.114964 1076050 cri.go:89] found id: ""
	I0127 15:40:28.115023 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.115036 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:28.115045 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:28.115106 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:28.153086 1076050 cri.go:89] found id: ""
	I0127 15:40:28.153115 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.153126 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:28.153135 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:28.153198 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:28.189564 1076050 cri.go:89] found id: ""
	I0127 15:40:28.189597 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.189607 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:28.189623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:28.189680 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:28.228037 1076050 cri.go:89] found id: ""
	I0127 15:40:28.228067 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.228076 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:28.228083 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:28.228163 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:28.277124 1076050 cri.go:89] found id: ""
	I0127 15:40:28.277155 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.277168 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:28.277179 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:28.277192 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:28.340183 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:28.340231 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:28.356822 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:28.356854 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:28.428923 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:28.428951 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:28.428968 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:28.833666 1074659 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:28.833746 1074659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:28.833840 1074659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:28.833927 1074659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:28.834008 1074659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:28.834082 1074659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:28.835576 1074659 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:28.835644 1074659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:28.835701 1074659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:28.835776 1074659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:28.835840 1074659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:28.835918 1074659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:28.835984 1074659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:28.836079 1074659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:28.836170 1074659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:28.836279 1074659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:28.836382 1074659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:28.836440 1074659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:28.836506 1074659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:28.836564 1074659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:28.836645 1074659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:28.836728 1074659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:28.836800 1074659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:28.836889 1074659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:28.836973 1074659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:28.837079 1074659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:28.838668 1074659 out.go:235]   - Booting up control plane ...
	I0127 15:40:28.838772 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:28.838882 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:28.838967 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:28.839120 1074659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:28.839212 1074659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:28.839261 1074659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:28.839412 1074659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:28.839527 1074659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:28.839621 1074659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.133738ms
	I0127 15:40:28.839718 1074659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:28.839793 1074659 kubeadm.go:310] [api-check] The API server is healthy after 5.001467165s
	I0127 15:40:28.839883 1074659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:40:28.840019 1074659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:40:28.840098 1074659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:40:28.840257 1074659 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-458006 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:40:28.840304 1074659 kubeadm.go:310] [bootstrap-token] Using token: ysn4g1.5k9s54b5xvzc8py2
	I0127 15:40:28.841707 1074659 out.go:235]   - Configuring RBAC rules ...
	I0127 15:40:28.841821 1074659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:40:28.841908 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:40:28.842072 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:40:28.842254 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:40:28.842425 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:40:28.842542 1074659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:40:28.842654 1074659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:40:28.842695 1074659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:40:28.842739 1074659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:40:28.842746 1074659 kubeadm.go:310] 
	I0127 15:40:28.842794 1074659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:40:28.842803 1074659 kubeadm.go:310] 
	I0127 15:40:28.842866 1074659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:40:28.842878 1074659 kubeadm.go:310] 
	I0127 15:40:28.842923 1074659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:40:28.843010 1074659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:40:28.843103 1074659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:40:28.843112 1074659 kubeadm.go:310] 
	I0127 15:40:28.843207 1074659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:40:28.843222 1074659 kubeadm.go:310] 
	I0127 15:40:28.843297 1074659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:40:28.843312 1074659 kubeadm.go:310] 
	I0127 15:40:28.843389 1074659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:40:28.843486 1074659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:40:28.843560 1074659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:40:28.843568 1074659 kubeadm.go:310] 
	I0127 15:40:28.843641 1074659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:40:28.843710 1074659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:40:28.843716 1074659 kubeadm.go:310] 
	I0127 15:40:28.843788 1074659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ysn4g1.5k9s54b5xvzc8py2 \
	I0127 15:40:28.843875 1074659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:40:28.843899 1074659 kubeadm.go:310] 	--control-plane 
	I0127 15:40:28.843908 1074659 kubeadm.go:310] 
	I0127 15:40:28.844015 1074659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:40:28.844024 1074659 kubeadm.go:310] 
	I0127 15:40:28.844090 1074659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ysn4g1.5k9s54b5xvzc8py2 \
	I0127 15:40:28.844200 1074659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:40:28.844221 1074659 cni.go:84] Creating CNI manager for ""
	I0127 15:40:28.844233 1074659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:40:28.845800 1074659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:40:28.847251 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:40:28.858165 1074659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:40:28.881328 1074659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:40:28.881400 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:28.881455 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-458006 minikube.k8s.io/updated_at=2025_01_27T15_40_28_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=no-preload-458006 minikube.k8s.io/primary=true
	I0127 15:40:28.897996 1074659 ops.go:34] apiserver oom_adj: -16
	I0127 15:40:29.095553 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:29.596344 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:30.096320 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:30.596512 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:31.096689 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:31.596534 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:32.096361 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:32.595892 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:33.095702 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:33.238790 1074659 kubeadm.go:1113] duration metric: took 4.357463541s to wait for elevateKubeSystemPrivileges
	I0127 15:40:33.238848 1074659 kubeadm.go:394] duration metric: took 5m2.327511742s to StartCluster
	I0127 15:40:33.238888 1074659 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:33.239099 1074659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:40:33.240861 1074659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:33.241710 1074659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:40:33.241765 1074659 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:40:33.241896 1074659 addons.go:69] Setting storage-provisioner=true in profile "no-preload-458006"
	I0127 15:40:33.241924 1074659 addons.go:238] Setting addon storage-provisioner=true in "no-preload-458006"
	W0127 15:40:33.241936 1074659 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:40:33.241970 1074659 config.go:182] Loaded profile config "no-preload-458006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:40:33.241993 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242098 1074659 addons.go:69] Setting default-storageclass=true in profile "no-preload-458006"
	I0127 15:40:33.242136 1074659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-458006"
	I0127 15:40:33.242491 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.242558 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.242562 1074659 addons.go:69] Setting dashboard=true in profile "no-preload-458006"
	I0127 15:40:33.242579 1074659 addons.go:238] Setting addon dashboard=true in "no-preload-458006"
	W0127 15:40:33.242587 1074659 addons.go:247] addon dashboard should already be in state true
	I0127 15:40:33.242619 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242642 1074659 addons.go:69] Setting metrics-server=true in profile "no-preload-458006"
	I0127 15:40:33.242681 1074659 addons.go:238] Setting addon metrics-server=true in "no-preload-458006"
	W0127 15:40:33.242703 1074659 addons.go:247] addon metrics-server should already be in state true
	I0127 15:40:33.242748 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242982 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243002 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243017 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.243038 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.243162 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243195 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.246220 1074659 out.go:177] * Verifying Kubernetes components...
	I0127 15:40:33.247844 1074659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:40:33.260866 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I0127 15:40:33.260900 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0127 15:40:33.260867 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0127 15:40:33.261687 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.261705 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.261805 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0127 15:40:33.262293 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262298 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262311 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.262320 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.262394 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.262663 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.262770 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.262824 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.262973 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262988 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.263265 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.263294 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.263301 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.263705 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.263777 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.263793 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.264103 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.264138 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.264160 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.265173 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.265220 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.266841 1074659 addons.go:238] Setting addon default-storageclass=true in "no-preload-458006"
	W0127 15:40:33.266861 1074659 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:40:33.266882 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.267142 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.267186 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.284237 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0127 15:40:33.284787 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.285432 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.285458 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.285817 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.286054 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.288006 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.288915 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0127 15:40:33.289278 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0127 15:40:33.289464 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.289551 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.290021 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.290033 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.290128 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.290135 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.290430 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.290487 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.290488 1074659 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:40:33.290680 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.290956 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.293313 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.293608 1074659 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:40:33.293756 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.295556 1074659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:40:33.295557 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:40:33.295679 1074659 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:40:33.295688 1074659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:40:32.977057 1074908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.901370931s)
	I0127 15:40:32.977156 1074908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:32.998093 1074908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:33.014544 1074908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:33.041108 1074908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:33.041138 1074908 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:33.041203 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:40:33.058390 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:33.058462 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:33.070074 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:40:33.087447 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:33.087524 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:33.101890 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:40:33.112384 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:33.112460 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:33.122774 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:40:33.133115 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:33.133183 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
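The stale-config cleanup logged above (kubeadm.go:155-163) greps each kubeconfig for the control-plane endpoint and removes any file that does not reference it, so that the following kubeadm init can regenerate them. A minimal Go sketch of that pattern, run locally for simplicity where the log does it over SSH with sudo (illustrative only, not minikube's actual implementation; endpoint and paths taken from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or stale config: remove it so kubeadm init can recreate it.
			os.Remove(path)
			fmt.Printf("removed stale config %s\n", path)
			continue
		}
		fmt.Printf("keeping %s\n", path)
	}
}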
	I0127 15:40:33.143719 1074908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:33.201432 1074908 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:33.201519 1074908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:33.371439 1074908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:33.371619 1074908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:33.371746 1074908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:33.380800 1074908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:28.505128 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:28.505170 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:31.047029 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:31.060582 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:31.060685 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:31.097127 1076050 cri.go:89] found id: ""
	I0127 15:40:31.097150 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.097160 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:31.097168 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:31.097230 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:31.134764 1076050 cri.go:89] found id: ""
	I0127 15:40:31.134799 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.134810 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:31.134818 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:31.134900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:31.174779 1076050 cri.go:89] found id: ""
	I0127 15:40:31.174807 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.174816 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:31.174822 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:31.174875 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:31.215471 1076050 cri.go:89] found id: ""
	I0127 15:40:31.215503 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.215513 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:31.215519 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:31.215572 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:31.253765 1076050 cri.go:89] found id: ""
	I0127 15:40:31.253796 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.253804 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:31.253811 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:31.253867 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:31.297130 1076050 cri.go:89] found id: ""
	I0127 15:40:31.297161 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.297170 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:31.297176 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:31.297240 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:31.335280 1076050 cri.go:89] found id: ""
	I0127 15:40:31.335315 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.335326 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:31.335334 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:31.335406 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:31.372619 1076050 cri.go:89] found id: ""
	I0127 15:40:31.372652 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.372664 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:31.372678 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:31.372693 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:31.427666 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:31.427709 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:31.442810 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:31.442842 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:31.511297 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:31.511330 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:31.511354 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:31.595122 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:31.595168 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
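The "container status" step above shells out to crictl and falls back to docker when crictl is unavailable (`which crictl || echo crictl` ps -a || docker ps -a). A self-contained Go sketch of that fallback; the real code drives these commands through ssh_runner on the node rather than locally:

package main

import (
	"fmt"
	"os/exec"
)

// listContainers mirrors the fallback in the log: try crictl first, then docker.
func listContainers() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	// crictl missing or failing: fall back to docker.
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(out)
}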
	I0127 15:40:33.383521 1074908 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:33.383651 1074908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:33.383757 1074908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:33.383895 1074908 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:33.383985 1074908 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:33.384074 1074908 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:33.384147 1074908 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:33.384245 1074908 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:33.384323 1074908 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:33.384413 1074908 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:33.384510 1074908 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:33.384563 1074908 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:33.384642 1074908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:33.553965 1074908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:33.739507 1074908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:33.994637 1074908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:34.154265 1074908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:34.373069 1074908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:34.373791 1074908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:34.379843 1074908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:33.295709 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.297475 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:40:33.297501 1074659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:40:33.297523 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.300714 1074659 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:33.300736 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:40:33.300756 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.301635 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I0127 15:40:33.302333 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.302863 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.302880 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.303349 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.303970 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.304013 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.305284 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.305834 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.305864 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306025 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.306086 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306246 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.306406 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.306488 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306592 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.309540 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.309565 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.309810 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.310021 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.310146 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.310163 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.310320 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.310404 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.310566 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.310593 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.310786 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.310945 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.329960 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 15:40:33.330745 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.331477 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.331497 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.331931 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.332248 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.334148 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.337343 1074659 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:33.337364 1074659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:40:33.337387 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.344679 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.345163 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.345261 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.345521 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.345738 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.345938 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.346117 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
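Each "new ssh client" line above corresponds to opening an SSH connection to the node using the per-machine private key, which later commands (systemctl, scp, kubectl apply) are run over. A small sketch using golang.org/x/crypto/ssh with the address and key path from the log; minikube's sshutil does the equivalent, and the command run here is just one example:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; not for production use
	}
	client, err := ssh.Dial("tcp", "192.168.50.30:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, _ := session.CombinedOutput("sudo systemctl start kubelet")
	fmt.Print(string(out))
}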
	I0127 15:40:33.464899 1074659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:40:33.489798 1074659 node_ready.go:35] waiting up to 6m0s for node "no-preload-458006" to be "Ready" ...
	I0127 15:40:33.523407 1074659 node_ready.go:49] node "no-preload-458006" has status "Ready":"True"
	I0127 15:40:33.523440 1074659 node_ready.go:38] duration metric: took 33.61111ms for node "no-preload-458006" to be "Ready" ...
	I0127 15:40:33.523453 1074659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:33.535257 1074659 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace to be "Ready" ...
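The readiness checks above (node_ready.go / pod_ready.go) poll the node and the system-critical pods until they report Ready. The same waits can be expressed with the bundled kubectl; a sketch driving that from Go, with names and paths taken from the log and the timeouts assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.32.1/kubectl"
	waits := [][]string{
		{"wait", "--for=condition=Ready", "node/no-preload-458006", "--timeout=6m"},
		{"wait", "--for=condition=Ready", "-n", "kube-system",
			"pod", "-l", "k8s-app=kube-dns", "--timeout=6m"},
	}
	for _, args := range waits {
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("wait failed:", err)
			return
		}
	}
}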
	I0127 15:40:33.568512 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:33.587974 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:40:33.588003 1074659 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:40:33.619075 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:40:33.619099 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:40:33.633023 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:40:33.633068 1074659 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:40:33.642970 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:33.657566 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:40:33.657595 1074659 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:40:33.664558 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:40:33.664588 1074659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:40:33.687856 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:40:33.687883 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:40:33.714005 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:40:33.714036 1074659 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:40:33.727527 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:33.727554 1074659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:40:33.764439 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:33.790606 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:40:33.790639 1074659 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:40:33.826641 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.826674 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.827044 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.827065 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.827075 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.827083 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.827331 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.827363 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:33.827373 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.834226 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.834269 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.834561 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.834578 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.867815 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:40:33.867848 1074659 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:40:33.891318 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:40:33.891362 1074659 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:40:33.964578 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:33.964616 1074659 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:40:34.002418 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:34.279743 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.279829 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.280331 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:34.280397 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.280425 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.280447 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.280473 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.280769 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:34.280818 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.280833 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.817958 1074659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053479215s)
	I0127 15:40:34.818069 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.818092 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.818435 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.818495 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.818509 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.818518 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.818778 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.818799 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.818811 1074659 addons.go:479] Verifying addon metrics-server=true in "no-preload-458006"
	I0127 15:40:35.547309 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:36.514576 1074659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.512097478s)
	I0127 15:40:36.514647 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:36.514666 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:36.515033 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:36.515046 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:36.515111 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:36.515130 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:36.515153 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:36.515488 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:36.515527 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:36.515503 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:36.517645 1074659 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-458006 addons enable metrics-server
	
	I0127 15:40:36.519535 1074659 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 15:40:36.520964 1074659 addons.go:514] duration metric: took 3.279215802s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
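The addon steps above all follow one pattern: copy each manifest into /etc/kubernetes/addons/ on the node, then apply the group in a single kubectl invocation under the cluster kubeconfig. A local sketch of that pattern (paths and kubectl binary from the log; the embedded StorageClass body is a hypothetical stand-in for the manifests minikube ships):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	addonDir := "/etc/kubernetes/addons"
	manifests := map[string]string{
		// Hypothetical minimal manifest; the real files are bundled with minikube.
		"storageclass.yaml": "apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: standard\nprovisioner: k8s.io/minikube-hostpath\n",
	}

	args := []string{"apply"}
	for name, body := range manifests {
		path := filepath.Join(addonDir, name)
		if err := os.WriteFile(path, []byte(body), 0644); err != nil {
			panic(err)
		}
		args = append(args, "-f", path)
	}

	cmd := exec.Command("/var/lib/minikube/binaries/v1.32.1/kubectl", args...)
	cmd.Env = append(cmd.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}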
	I0127 15:40:34.138287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:34.156651 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:34.156734 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:34.194604 1076050 cri.go:89] found id: ""
	I0127 15:40:34.194647 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.194658 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:34.194666 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:34.194729 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:34.233299 1076050 cri.go:89] found id: ""
	I0127 15:40:34.233353 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.233363 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:34.233369 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:34.233423 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:34.274424 1076050 cri.go:89] found id: ""
	I0127 15:40:34.274453 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.274465 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:34.274473 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:34.274539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:34.317113 1076050 cri.go:89] found id: ""
	I0127 15:40:34.317144 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.317155 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:34.317168 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:34.317239 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:34.359212 1076050 cri.go:89] found id: ""
	I0127 15:40:34.359242 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.359252 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:34.359261 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:34.359328 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:34.398773 1076050 cri.go:89] found id: ""
	I0127 15:40:34.398805 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.398824 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:34.398833 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:34.398910 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:34.440053 1076050 cri.go:89] found id: ""
	I0127 15:40:34.440087 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.440099 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:34.440107 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:34.440178 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:34.482908 1076050 cri.go:89] found id: ""
	I0127 15:40:34.482943 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.482959 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:34.482973 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:34.482992 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:34.500178 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:34.500206 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:34.580251 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:34.580279 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:34.580302 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:34.673730 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:34.673772 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:34.720797 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:34.720838 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:37.282487 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:37.300162 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:37.300231 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:37.348753 1076050 cri.go:89] found id: ""
	I0127 15:40:37.348786 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.348798 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:37.348806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:37.348870 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:37.398630 1076050 cri.go:89] found id: ""
	I0127 15:40:37.398669 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.398681 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:37.398689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:37.398761 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:37.437030 1076050 cri.go:89] found id: ""
	I0127 15:40:37.437127 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.437155 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:37.437188 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:37.437277 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:37.477745 1076050 cri.go:89] found id: ""
	I0127 15:40:37.477837 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.477855 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:37.477864 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:37.477937 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:37.514259 1076050 cri.go:89] found id: ""
	I0127 15:40:37.514292 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.514302 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:37.514311 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:37.514385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:37.551313 1076050 cri.go:89] found id: ""
	I0127 15:40:37.551349 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.551359 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:37.551367 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:37.551427 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:37.593740 1076050 cri.go:89] found id: ""
	I0127 15:40:37.593772 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.593783 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:37.593791 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:37.593854 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:37.634133 1076050 cri.go:89] found id: ""
	I0127 15:40:37.634169 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.634181 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:37.634194 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:37.634217 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:37.699046 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:37.699092 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:37.717470 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:37.717512 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:37.791051 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:37.791077 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:37.791106 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:37.882694 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:37.882742 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:34.381325 1074908 out.go:235]   - Booting up control plane ...
	I0127 15:40:34.381471 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:34.381579 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:34.382092 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:34.406494 1074908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:34.413899 1074908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:34.414029 1074908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:34.583151 1074908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:34.583269 1074908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:35.584905 1074908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001687336s
	I0127 15:40:35.585033 1074908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:40.587681 1074908 kubeadm.go:310] [api-check] The API server is healthy after 5.001284493s
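The kubelet-check and api-check phases above poll local health endpoints (the kubelet at http://127.0.0.1:10248/healthz, per the log) until they answer 200 OK or the 4m0s deadline passes. A sketch of that polling loop; the endpoint and timeout come from the log, the poll interval is assumed:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}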
	I0127 15:40:40.610814 1074908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:40:40.631959 1074908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:40:40.691115 1074908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:40:40.691368 1074908 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-349782 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:40:40.717976 1074908 kubeadm.go:310] [bootstrap-token] Using token: 2miseq.yzn49d7krpbx0jxu
	I0127 15:40:40.719603 1074908 out.go:235]   - Configuring RBAC rules ...
	I0127 15:40:40.719764 1074908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:40:40.734536 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:40:40.754140 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:40:40.763500 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:40:40.769897 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:40:40.777335 1074908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:40:40.995105 1074908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:40:41.449029 1074908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:40:41.995223 1074908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:40:41.996543 1074908 kubeadm.go:310] 
	I0127 15:40:41.996660 1074908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:40:41.996672 1074908 kubeadm.go:310] 
	I0127 15:40:41.996788 1074908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:40:41.996798 1074908 kubeadm.go:310] 
	I0127 15:40:41.996838 1074908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:40:41.996921 1074908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:40:41.996994 1074908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:40:41.997025 1074908 kubeadm.go:310] 
	I0127 15:40:41.997151 1074908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:40:41.997173 1074908 kubeadm.go:310] 
	I0127 15:40:41.997241 1074908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:40:41.997253 1074908 kubeadm.go:310] 
	I0127 15:40:41.997329 1074908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:40:41.997435 1074908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:40:41.997539 1074908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:40:41.997547 1074908 kubeadm.go:310] 
	I0127 15:40:41.997672 1074908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:40:41.997777 1074908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:40:41.997789 1074908 kubeadm.go:310] 
	I0127 15:40:41.997873 1074908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2miseq.yzn49d7krpbx0jxu \
	I0127 15:40:41.997954 1074908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:40:41.997974 1074908 kubeadm.go:310] 	--control-plane 
	I0127 15:40:41.997980 1074908 kubeadm.go:310] 
	I0127 15:40:41.998045 1074908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:40:41.998056 1074908 kubeadm.go:310] 
	I0127 15:40:41.998117 1074908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2miseq.yzn49d7krpbx0jxu \
	I0127 15:40:41.998204 1074908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:40:41.999397 1074908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:40:41.999437 1074908 cni.go:84] Creating CNI manager for ""
	I0127 15:40:41.999448 1074908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:40:42.001383 1074908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:40:38.042609 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:40.046811 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:40.431585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:40.449664 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:40.449766 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:40.500904 1076050 cri.go:89] found id: ""
	I0127 15:40:40.500995 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.501020 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:40.501029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:40.501103 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:40.543907 1076050 cri.go:89] found id: ""
	I0127 15:40:40.543939 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.543950 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:40.543958 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:40.544018 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:40.592294 1076050 cri.go:89] found id: ""
	I0127 15:40:40.592328 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.592339 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:40.592352 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:40.592418 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:40.641396 1076050 cri.go:89] found id: ""
	I0127 15:40:40.641429 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.641439 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:40.641449 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:40.641522 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:40.687151 1076050 cri.go:89] found id: ""
	I0127 15:40:40.687185 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.687197 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:40.687206 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:40.687279 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:40.728537 1076050 cri.go:89] found id: ""
	I0127 15:40:40.728573 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.728584 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:40.728593 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:40.728666 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:40.770995 1076050 cri.go:89] found id: ""
	I0127 15:40:40.771022 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.771035 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:40.771042 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:40.771108 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:40.818299 1076050 cri.go:89] found id: ""
	I0127 15:40:40.818332 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.818344 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:40.818357 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:40.818379 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:40.835538 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:40.835566 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:40.912785 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:40.912812 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:40.912829 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:41.029124 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:41.029177 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:41.088618 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:41.088649 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:42.002886 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:40:42.019774 1074908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
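The 496-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration that the "Configuring bridge CNI" step installs. The log does not show the file's contents, so the conflist below is only a representative bridge + portmap chain of the usual shape, written from Go for illustration:

package main

import "os"

func main() {
	// Illustrative bridge CNI chain; minikube's real 1-k8s.conflist may differ in values.
	conflist := `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}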
	I0127 15:40:42.041710 1074908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:40:42.041880 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:42.042011 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-349782 minikube.k8s.io/updated_at=2025_01_27T15_40_42_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=embed-certs-349782 minikube.k8s.io/primary=true
	I0127 15:40:42.071903 1074908 ops.go:34] apiserver oom_adj: -16
	I0127 15:40:42.298644 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:42.799727 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:43.299289 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:43.799485 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:44.299597 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:44.799559 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:45.299631 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:45.388381 1074908 kubeadm.go:1113] duration metric: took 3.346560313s to wait for elevateKubeSystemPrivileges
	I0127 15:40:45.388421 1074908 kubeadm.go:394] duration metric: took 5m3.554845692s to StartCluster
	I0127 15:40:45.388444 1074908 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:45.388536 1074908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:40:45.390768 1074908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:45.391081 1074908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.43 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:40:45.391145 1074908 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:40:45.391269 1074908 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-349782"
	I0127 15:40:45.391288 1074908 addons.go:69] Setting dashboard=true in profile "embed-certs-349782"
	I0127 15:40:45.391320 1074908 addons.go:238] Setting addon dashboard=true in "embed-certs-349782"
	I0127 15:40:45.391319 1074908 addons.go:69] Setting metrics-server=true in profile "embed-certs-349782"
	I0127 15:40:45.391294 1074908 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-349782"
	I0127 15:40:45.391334 1074908 config.go:182] Loaded profile config "embed-certs-349782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:40:45.391343 1074908 addons.go:238] Setting addon metrics-server=true in "embed-certs-349782"
	W0127 15:40:45.391353 1074908 addons.go:247] addon metrics-server should already be in state true
	W0127 15:40:45.391330 1074908 addons.go:247] addon dashboard should already be in state true
	W0127 15:40:45.391338 1074908 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:40:45.391406 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391417 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391276 1074908 addons.go:69] Setting default-storageclass=true in profile "embed-certs-349782"
	I0127 15:40:45.391503 1074908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-349782"
	I0127 15:40:45.391386 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391836 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391838 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391876 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.391925 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391951 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.391954 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391982 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.392044 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.396751 1074908 out.go:177] * Verifying Kubernetes components...
	I0127 15:40:45.398763 1074908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:40:45.411089 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0127 15:40:45.411341 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0127 15:40:45.411740 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.411839 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.412321 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.412348 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.412429 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45519
	I0127 15:40:45.412455 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.412471 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.412710 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.412921 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.413145 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0127 15:40:45.413359 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.413399 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.413439 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.413451 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.413623 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.413854 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.413991 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.414216 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.414233 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.414273 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.414298 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.414583 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.414766 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.414772 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.414845 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.418728 1074908 addons.go:238] Setting addon default-storageclass=true in "embed-certs-349782"
	W0127 15:40:45.418755 1074908 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:40:45.418787 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.419153 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.419189 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.436563 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0127 15:40:45.437032 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.437309 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0127 15:40:45.437764 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.437783 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.437859 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0127 15:40:45.437986 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.438180 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.438423 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.438439 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.438503 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.438549 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.439042 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.439059 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.439120 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.439496 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.439564 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.440296 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.440349 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.440835 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.441539 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0127 15:40:45.442136 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.442687 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.443524 1074908 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:40:45.443584 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.443599 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.443863 1074908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:40:45.443950 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.444664 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.445476 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:40:45.445498 1074908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:40:45.445531 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.446460 1074908 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:40:45.446697 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.451306 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:40:45.456066 1074908 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:40:45.452788 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.456096 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.454144 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.456132 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.456169 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.456379 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.456396 1074908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:40:42.547331 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:44.081830 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.081865 1074659 pod_ready.go:82] duration metric: took 10.546579527s for pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.081882 1074659 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.097962 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.097994 1074659 pod_ready.go:82] duration metric: took 16.102725ms for pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.098014 1074659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.117810 1074659 pod_ready.go:93] pod "etcd-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.117845 1074659 pod_ready.go:82] duration metric: took 19.821766ms for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.117861 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.147522 1074659 pod_ready.go:93] pod "kube-apiserver-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.147557 1074659 pod_ready.go:82] duration metric: took 29.685956ms for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.147573 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.163535 1074659 pod_ready.go:93] pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.163570 1074659 pod_ready.go:82] duration metric: took 15.987018ms for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.163585 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6j6r5" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.440133 1074659 pod_ready.go:93] pod "kube-proxy-6j6r5" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.440165 1074659 pod_ready.go:82] duration metric: took 276.571766ms for pod "kube-proxy-6j6r5" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.440180 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.865610 1074659 pod_ready.go:93] pod "kube-scheduler-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.865643 1074659 pod_ready.go:82] duration metric: took 425.453541ms for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.865655 1074659 pod_ready.go:39] duration metric: took 11.34218973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:44.865682 1074659 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:40:44.865746 1074659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:44.906758 1074659 api_server.go:72] duration metric: took 11.665005612s to wait for apiserver process to appear ...
	I0127 15:40:44.906793 1074659 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:40:44.906829 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:40:44.912296 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 200:
	ok
	I0127 15:40:44.913396 1074659 api_server.go:141] control plane version: v1.32.1
	I0127 15:40:44.913416 1074659 api_server.go:131] duration metric: took 6.606206ms to wait for apiserver health ...
	I0127 15:40:44.913424 1074659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:40:45.045967 1074659 system_pods.go:59] 9 kube-system pods found
	I0127 15:40:45.046012 1074659 system_pods.go:61] "coredns-668d6bf9bc-sp7p4" [7fbb8eca-e2e6-4760-a0b6-8c6387fe9960] Running
	I0127 15:40:45.046020 1074659 system_pods.go:61] "coredns-668d6bf9bc-xgx78" [c3cc3887-d694-4b39-9ad1-c03fcf97b608] Running
	I0127 15:40:45.046025 1074659 system_pods.go:61] "etcd-no-preload-458006" [2474c045-aaa4-4190-8392-3dea1976ded1] Running
	I0127 15:40:45.046031 1074659 system_pods.go:61] "kube-apiserver-no-preload-458006" [2529a3ec-c6a0-4cc7-b93a-7964e435ada0] Running
	I0127 15:40:45.046038 1074659 system_pods.go:61] "kube-controller-manager-no-preload-458006" [989d2483-4dc3-4add-ad64-7f76d4b5c765] Running
	I0127 15:40:45.046043 1074659 system_pods.go:61] "kube-proxy-6j6r5" [3ca06a87-654b-42c2-ac04-12d9b0472973] Running
	I0127 15:40:45.046047 1074659 system_pods.go:61] "kube-scheduler-no-preload-458006" [f6afe797-0eed-4f54-8ed6-fbe75d411b7a] Running
	I0127 15:40:45.046056 1074659 system_pods.go:61] "metrics-server-f79f97bbb-k7879" [137f45e8-cf1d-404b-af06-4b99a257450f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:40:45.046063 1074659 system_pods.go:61] "storage-provisioner" [8e874460-b5bf-4ce6-b1ca-9c188b1fd4e6] Running
	I0127 15:40:45.046074 1074659 system_pods.go:74] duration metric: took 132.642132ms to wait for pod list to return data ...
	I0127 15:40:45.046089 1074659 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:40:45.246663 1074659 default_sa.go:45] found service account: "default"
	I0127 15:40:45.246694 1074659 default_sa.go:55] duration metric: took 200.600423ms for default service account to be created ...
	I0127 15:40:45.246707 1074659 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:40:45.449871 1074659 system_pods.go:87] 9 kube-system pods found
	I0127 15:40:43.646818 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:43.660154 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:43.660237 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:43.698517 1076050 cri.go:89] found id: ""
	I0127 15:40:43.698548 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.698557 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:43.698563 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:43.698624 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:43.736919 1076050 cri.go:89] found id: ""
	I0127 15:40:43.736954 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.736967 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:43.736978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:43.737064 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:43.777333 1076050 cri.go:89] found id: ""
	I0127 15:40:43.777369 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.777382 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:43.777391 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:43.777462 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:43.817427 1076050 cri.go:89] found id: ""
	I0127 15:40:43.817460 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.817471 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:43.817480 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:43.817546 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:43.866498 1076050 cri.go:89] found id: ""
	I0127 15:40:43.866527 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.866538 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:43.866546 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:43.866616 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:43.919477 1076050 cri.go:89] found id: ""
	I0127 15:40:43.919510 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.919521 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:43.919530 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:43.919593 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:43.958203 1076050 cri.go:89] found id: ""
	I0127 15:40:43.958242 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.958261 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:43.958270 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:43.958340 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:43.996729 1076050 cri.go:89] found id: ""
	I0127 15:40:43.996760 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.996769 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:43.996779 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:43.996792 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:44.051707 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:44.051748 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:44.069643 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:44.069674 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:44.146464 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:44.146489 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:44.146505 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:44.230654 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:44.230696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:46.788290 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:46.807855 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:46.807942 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:46.861569 1076050 cri.go:89] found id: ""
	I0127 15:40:46.861596 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.861608 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:46.861615 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:46.861684 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:46.919686 1076050 cri.go:89] found id: ""
	I0127 15:40:46.919719 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.919732 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:46.919741 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:46.919810 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:46.959359 1076050 cri.go:89] found id: ""
	I0127 15:40:46.959419 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.959432 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:46.959440 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:46.959503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:47.000445 1076050 cri.go:89] found id: ""
	I0127 15:40:47.000489 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.000503 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:47.000512 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:47.000583 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:47.041395 1076050 cri.go:89] found id: ""
	I0127 15:40:47.041426 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.041440 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:47.041449 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:47.041512 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:47.086753 1076050 cri.go:89] found id: ""
	I0127 15:40:47.086787 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.086800 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:47.086808 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:47.086883 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:47.128760 1076050 cri.go:89] found id: ""
	I0127 15:40:47.128788 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.128799 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:47.128807 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:47.128876 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:47.173743 1076050 cri.go:89] found id: ""
	I0127 15:40:47.173779 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.173791 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:47.173804 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:47.173818 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:47.280755 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:47.280817 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:47.343245 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:47.343291 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:47.425229 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:47.425282 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:47.446605 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:47.446649 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:47.563807 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:45.456519 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.456939 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.457981 1074908 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:45.458002 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:40:45.458020 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.460172 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.460862 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.460921 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.461259 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.461487 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.461715 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.461874 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.462195 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.462273 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.462309 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.462659 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.462819 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.462924 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.463019 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.464793 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0127 15:40:45.465301 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.465795 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.465815 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.468906 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.469208 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.471230 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.471522 1074908 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:45.471538 1074908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:40:45.471562 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.474700 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.475171 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.475203 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.475388 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.475596 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.475722 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.475899 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.617662 1074908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:40:45.639438 1074908 node_ready.go:35] waiting up to 6m0s for node "embed-certs-349782" to be "Ready" ...
	I0127 15:40:45.668405 1074908 node_ready.go:49] node "embed-certs-349782" has status "Ready":"True"
	I0127 15:40:45.668432 1074908 node_ready.go:38] duration metric: took 28.956722ms for node "embed-certs-349782" to be "Ready" ...
	I0127 15:40:45.668451 1074908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:45.676760 1074908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:45.743936 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:40:45.743967 1074908 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:40:45.755731 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:45.759201 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:40:45.759233 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:40:45.772228 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:45.805739 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:40:45.805773 1074908 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:40:45.823459 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:40:45.823500 1074908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:40:45.854823 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:40:45.854859 1074908 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:40:45.891284 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:45.891327 1074908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:40:45.931396 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:40:45.931431 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:40:46.015320 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:40:46.015360 1074908 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:40:46.015364 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:46.083527 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:40:46.083563 1074908 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:40:46.246566 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:40:46.246597 1074908 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:40:46.376290 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:40:46.376329 1074908 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:40:46.427597 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:46.427631 1074908 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:40:46.482003 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:47.410166 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.637893772s)
	I0127 15:40:47.410259 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.410166 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.654370109s)
	I0127 15:40:47.410282 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.410349 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.410372 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.410843 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.410875 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.412611 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.412628 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.412638 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.412646 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.412761 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.412798 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.412830 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.412850 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.412903 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.413172 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.413266 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.413342 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.414418 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.414437 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.474683 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.474722 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.475077 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.475151 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.475172 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.777164 1074908 pod_ready.go:103] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:47.977107 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.961691521s)
	I0127 15:40:47.977187 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.977203 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.977515 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.977556 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.977595 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.977608 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.977619 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.977883 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.977933 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.977955 1074908 addons.go:479] Verifying addon metrics-server=true in "embed-certs-349782"
	I0127 15:40:47.977965 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:49.266293 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.7842336s)
	I0127 15:40:49.266371 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:49.266386 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:49.266731 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:49.266754 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:49.266771 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:49.266779 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:49.267033 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:49.267086 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:49.267106 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:49.268778 1074908 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-349782 addons enable metrics-server
	
	I0127 15:40:49.270188 1074908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 15:40:52.460023 1075160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.936714261s)
	I0127 15:40:52.460128 1075160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:52.476845 1075160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:52.487966 1075160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:52.499961 1075160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:52.499988 1075160 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:52.500037 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 15:40:52.511034 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:52.511115 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:52.524517 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 15:40:52.534966 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:52.535048 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:52.545245 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 15:40:52.555070 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:52.555149 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:52.569605 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 15:40:52.581711 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:52.581794 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:52.592228 1075160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:52.654498 1075160 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:52.654647 1075160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:52.779741 1075160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:52.779912 1075160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:52.780069 1075160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:52.790096 1075160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:50.064460 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:50.080142 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:50.080219 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:50.120604 1076050 cri.go:89] found id: ""
	I0127 15:40:50.120643 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.120655 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:50.120661 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:50.120716 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:50.161728 1076050 cri.go:89] found id: ""
	I0127 15:40:50.161766 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.161777 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:50.161785 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:50.161851 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:50.199247 1076050 cri.go:89] found id: ""
	I0127 15:40:50.199275 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.199286 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:50.199293 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:50.199369 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:50.246623 1076050 cri.go:89] found id: ""
	I0127 15:40:50.246652 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.246663 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:50.246672 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:50.246742 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:50.284077 1076050 cri.go:89] found id: ""
	I0127 15:40:50.284111 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.284123 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:50.284132 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:50.284200 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:50.326481 1076050 cri.go:89] found id: ""
	I0127 15:40:50.326518 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.326530 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:50.326539 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:50.326597 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:50.364165 1076050 cri.go:89] found id: ""
	I0127 15:40:50.364198 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.364210 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:50.364218 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:50.364280 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:50.402527 1076050 cri.go:89] found id: ""
	I0127 15:40:50.402560 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.402572 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:50.402586 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:50.402602 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:50.485370 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:50.485412 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:50.539508 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:50.539547 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:50.591618 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:50.591656 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:50.609824 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:50.609873 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:50.694094 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
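	The poll cycles above and below are the restart loop shelling out to crictl over SSH to see whether any control-plane containers exist yet; every name comes back empty, so it falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. A rough local equivalent of that container probe, as a standalone Go sketch (assumes crictl and sudo are available on the node; the component names are simply the ones the log checks, not an exhaustive list):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	// Same component names the minikube log lines above probe for.
	    	names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	    	for _, name := range names {
	    		// "crictl ps -a --quiet" prints one container ID per line, or nothing at all.
	    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	    		if err != nil {
	    			fmt.Printf("%s: crictl failed: %v\n", name, err)
	    			continue
	    		}
	    		ids := strings.Fields(string(out))
	    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	    	}
	    }

	An empty result for every name, as seen here, means the v1.20.0 control plane never came up, which is also why the kubectl "describe nodes" call keeps failing against localhost:8443.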
	I0127 15:40:53.194813 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:53.211192 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:53.211271 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:53.258010 1076050 cri.go:89] found id: ""
	I0127 15:40:53.258042 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.258060 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:53.258069 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:53.258138 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:53.297402 1076050 cri.go:89] found id: ""
	I0127 15:40:53.297430 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.297440 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:53.297448 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:53.297511 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:53.336412 1076050 cri.go:89] found id: ""
	I0127 15:40:53.336440 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.336450 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:53.336457 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:53.336526 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:53.383904 1076050 cri.go:89] found id: ""
	I0127 15:40:53.383939 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.383950 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:53.383959 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:53.384031 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:53.435476 1076050 cri.go:89] found id: ""
	I0127 15:40:53.435512 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.435525 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:53.435533 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:53.435604 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:49.271495 1074908 addons.go:514] duration metric: took 3.880366443s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 15:40:50.196894 1074908 pod_ready.go:103] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:51.684593 1074908 pod_ready.go:93] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:51.684619 1074908 pod_ready.go:82] duration metric: took 6.007831808s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.684632 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.693065 1074908 pod_ready.go:93] pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:51.693095 1074908 pod_ready.go:82] duration metric: took 8.4536ms for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.693110 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:52.703593 1074908 pod_ready.go:93] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:52.703626 1074908 pod_ready.go:82] duration metric: took 1.010507584s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:52.703641 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:53.710652 1074908 pod_ready.go:93] pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:53.710683 1074908 pod_ready.go:82] duration metric: took 1.007031836s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:53.710695 1074908 pod_ready.go:39] duration metric: took 8.042232456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:53.710716 1074908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:40:53.710780 1074908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:53.771554 1074908 api_server.go:72] duration metric: took 8.380427434s to wait for apiserver process to appear ...
	I0127 15:40:53.771585 1074908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:40:53.771611 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:40:53.779085 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 200:
	ok
	I0127 15:40:53.780297 1074908 api_server.go:141] control plane version: v1.32.1
	I0127 15:40:53.780325 1074908 api_server.go:131] duration metric: took 8.731633ms to wait for apiserver health ...
	I0127 15:40:53.780335 1074908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:40:53.788343 1074908 system_pods.go:59] 9 kube-system pods found
	I0127 15:40:53.788373 1074908 system_pods.go:61] "coredns-668d6bf9bc-2ggkc" [ae4bf072-7cfb-4a26-8c71-abd3cbc52c28] Running
	I0127 15:40:53.788380 1074908 system_pods.go:61] "coredns-668d6bf9bc-h92kp" [5c29333b-4ea9-44fa-8be6-c350e6b709fe] Running
	I0127 15:40:53.788384 1074908 system_pods.go:61] "etcd-embed-certs-349782" [fcb552ae-bb9e-49de-a183-a26f8cac7e56] Running
	I0127 15:40:53.788388 1074908 system_pods.go:61] "kube-apiserver-embed-certs-349782" [5161cdd2-9cea-4b6d-9023-b20f56e14d9c] Running
	I0127 15:40:53.788392 1074908 system_pods.go:61] "kube-controller-manager-embed-certs-349782" [defbaf3b-e25a-4e20-a602-4be47bd2cc4b] Running
	I0127 15:40:53.788395 1074908 system_pods.go:61] "kube-proxy-vhpzl" [1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf] Running
	I0127 15:40:53.788398 1074908 system_pods.go:61] "kube-scheduler-embed-certs-349782" [ed785153-6f53-4289-a191-5545960c300f] Running
	I0127 15:40:53.788404 1074908 system_pods.go:61] "metrics-server-f79f97bbb-pnbcx" [af453586-d131-4ba7-aa9f-290eb044d58e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:40:53.788411 1074908 system_pods.go:61] "storage-provisioner" [e5c6e59a-52ab-4707-a438-5d01890928db] Running
	I0127 15:40:53.788422 1074908 system_pods.go:74] duration metric: took 8.079129ms to wait for pod list to return data ...
	I0127 15:40:53.788430 1074908 default_sa.go:34] waiting for default service account to be created ...
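	In the embed-certs run interleaved here, the "apiserver healthz" wait is an HTTPS GET against https://192.168.61.43:8443/healthz that is considered healthy once it returns 200 with body "ok". A minimal standalone sketch of that probe (not minikube's own client setup; the endpoint is taken from the log, and certificate verification is skipped here purely for illustration):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	// Endpoint as reported in the log above; adjust for your own cluster.
	    	url := "https://192.168.61.43:8443/healthz"
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		// Skips serving-cert verification; a real client would trust the cluster CA instead.
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	resp, err := client.Get(url)
	    	if err != nil {
	    		fmt.Println("healthz unreachable:", err)
	    		return
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	    }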
	I0127 15:40:52.793113 1075160 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:52.793243 1075160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:52.793339 1075160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:52.793480 1075160 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:52.793582 1075160 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:52.793692 1075160 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:52.793783 1075160 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:52.793875 1075160 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:52.793966 1075160 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:52.794100 1075160 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:52.794204 1075160 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:52.794273 1075160 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:52.794363 1075160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:52.989346 1075160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:53.518286 1075160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:53.684220 1075160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:53.833269 1075160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:53.959433 1075160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:53.959944 1075160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:53.962645 1075160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:53.964848 1075160 out.go:235]   - Booting up control plane ...
	I0127 15:40:53.964986 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:53.965139 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:53.967441 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:53.990143 1075160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:53.997601 1075160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:53.997684 1075160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:54.175814 1075160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:54.175985 1075160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:54.677251 1075160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.539769ms
	I0127 15:40:54.677364 1075160 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:53.477359 1076050 cri.go:89] found id: ""
	I0127 15:40:53.477389 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.477400 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:53.477408 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:53.477473 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:53.522739 1076050 cri.go:89] found id: ""
	I0127 15:40:53.522777 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.522789 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:53.522798 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:53.522870 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:53.591524 1076050 cri.go:89] found id: ""
	I0127 15:40:53.591556 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.591568 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:53.591581 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:53.591601 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:53.645459 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:53.645495 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:53.662522 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:53.662551 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:53.743915 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:53.743940 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:53.743957 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:53.844477 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:53.844511 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:56.390836 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:56.404803 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:56.404892 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:56.448556 1076050 cri.go:89] found id: ""
	I0127 15:40:56.448586 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.448597 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:56.448606 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:56.448674 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:56.501798 1076050 cri.go:89] found id: ""
	I0127 15:40:56.501833 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.501854 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:56.501863 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:56.501932 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:56.549831 1076050 cri.go:89] found id: ""
	I0127 15:40:56.549882 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.549895 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:56.549904 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:56.549976 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:56.604199 1076050 cri.go:89] found id: ""
	I0127 15:40:56.604236 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.604248 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:56.604258 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:56.604361 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:56.662492 1076050 cri.go:89] found id: ""
	I0127 15:40:56.662529 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.662540 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:56.662550 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:56.662621 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:56.712694 1076050 cri.go:89] found id: ""
	I0127 15:40:56.712731 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.712743 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:56.712752 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:56.712821 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:56.759321 1076050 cri.go:89] found id: ""
	I0127 15:40:56.759355 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.759366 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:56.759375 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:56.759441 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:56.806457 1076050 cri.go:89] found id: ""
	I0127 15:40:56.806487 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.806499 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:56.806511 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:56.806528 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:56.885361 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:56.885416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:56.904333 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:56.904390 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:57.003794 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:57.003820 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:57.003845 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:57.107181 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:57.107240 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:53.791640 1074908 default_sa.go:45] found service account: "default"
	I0127 15:40:53.791671 1074908 default_sa.go:55] duration metric: took 3.229036ms for default service account to be created ...
	I0127 15:40:53.791682 1074908 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:40:53.798897 1074908 system_pods.go:87] 9 kube-system pods found
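	The system_pods wait above lists the kube-system pods and checks phase and readiness before the cluster is declared usable. minikube drives this through its own helpers over the cluster's kubeconfig; a rough client-go equivalent against a local kubeconfig (the path is an assumption, not what the test host uses) would be:

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	// Assumed kubeconfig location (~/.kube/config).
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, p := range pods.Items {
	    		ready := false
	    		for _, c := range p.Status.Conditions {
	    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	    				ready = true
	    			}
	    		}
	    		fmt.Printf("%s: phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	    	}
	    }

	In the run above, metrics-server is the only pod still Pending, which is tolerated at this stage because readiness of the metrics-server addon is verified separately.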
	I0127 15:41:00.679789 1075160 kubeadm.go:310] [api-check] The API server is healthy after 6.002206079s
	I0127 15:41:00.695507 1075160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:41:00.712356 1075160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:41:00.738343 1075160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:41:00.738640 1075160 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-912913 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:41:00.753238 1075160 kubeadm.go:310] [bootstrap-token] Using token: 5gsmwo.93b5mx0ng9gboctz
	I0127 15:41:00.754589 1075160 out.go:235]   - Configuring RBAC rules ...
	I0127 15:41:00.754718 1075160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:41:00.773508 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:41:00.781170 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:41:00.784358 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:41:00.787629 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:41:00.790904 1075160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:41:01.087298 1075160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:41:01.539193 1075160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:41:02.088850 1075160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:41:02.089949 1075160 kubeadm.go:310] 
	I0127 15:41:02.090088 1075160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:41:02.090112 1075160 kubeadm.go:310] 
	I0127 15:41:02.090212 1075160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:41:02.090222 1075160 kubeadm.go:310] 
	I0127 15:41:02.090256 1075160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:41:02.090363 1075160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:41:02.090438 1075160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:41:02.090447 1075160 kubeadm.go:310] 
	I0127 15:41:02.090529 1075160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:41:02.090542 1075160 kubeadm.go:310] 
	I0127 15:41:02.090605 1075160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:41:02.090612 1075160 kubeadm.go:310] 
	I0127 15:41:02.090674 1075160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:41:02.090813 1075160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:41:02.090903 1075160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:41:02.090913 1075160 kubeadm.go:310] 
	I0127 15:41:02.091020 1075160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:41:02.091116 1075160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:41:02.091126 1075160 kubeadm.go:310] 
	I0127 15:41:02.091223 1075160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5gsmwo.93b5mx0ng9gboctz \
	I0127 15:41:02.091357 1075160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:41:02.091383 1075160 kubeadm.go:310] 	--control-plane 
	I0127 15:41:02.091393 1075160 kubeadm.go:310] 
	I0127 15:41:02.091482 1075160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:41:02.091490 1075160 kubeadm.go:310] 
	I0127 15:41:02.091576 1075160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5gsmwo.93b5mx0ng9gboctz \
	I0127 15:41:02.091686 1075160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:41:02.093055 1075160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:41:02.093120 1075160 cni.go:84] Creating CNI manager for ""
	I0127 15:41:02.093134 1075160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:41:02.095065 1075160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:41:02.096511 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:41:02.110508 1075160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:41:02.132628 1075160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:41:02.132723 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:02.132745 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-912913 minikube.k8s.io/updated_at=2025_01_27T15_41_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=default-k8s-diff-port-912913 minikube.k8s.io/primary=true
	I0127 15:41:02.380721 1075160 ops.go:34] apiserver oom_adj: -16
	I0127 15:41:02.380856 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
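	The "cat /proc/$(pgrep kube-apiserver)/oom_adj" step in the block above confirms the freshly started apiserver is shielded from the OOM killer (the log records -16). The same check done directly in Go, as a small sketch (assumes pgrep and the legacy oom_adj proc file are available, as on the test VM):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	// Pick the newest kube-apiserver PID; the log's shell form uses $(pgrep kube-apiserver).
	    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	    	if err != nil {
	    		fmt.Println("kube-apiserver not running:", err)
	    		return
	    	}
	    	pid := strings.TrimSpace(string(out))
	    	// oom_adj is the legacy knob; negative values make the OOM killer avoid the process.
	    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	    	if err != nil {
	    		fmt.Println("read failed:", err)
	    		return
	    	}
	    	fmt.Printf("kube-apiserver pid %s oom_adj: %s", pid, adj)
	    }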
	I0127 15:40:59.656976 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:59.675626 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:59.675762 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:59.719313 1076050 cri.go:89] found id: ""
	I0127 15:40:59.719343 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.719351 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:59.719357 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:59.719441 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:59.758380 1076050 cri.go:89] found id: ""
	I0127 15:40:59.758419 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.758433 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:59.758441 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:59.758511 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:59.802754 1076050 cri.go:89] found id: ""
	I0127 15:40:59.802787 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.802798 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:59.802806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:59.802874 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:59.847665 1076050 cri.go:89] found id: ""
	I0127 15:40:59.847695 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.847707 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:59.847716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:59.847781 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:59.888840 1076050 cri.go:89] found id: ""
	I0127 15:40:59.888867 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.888875 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:59.888882 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:59.888946 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:59.935416 1076050 cri.go:89] found id: ""
	I0127 15:40:59.935448 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.935460 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:59.935468 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:59.935544 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:59.982418 1076050 cri.go:89] found id: ""
	I0127 15:40:59.982448 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.982456 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:59.982464 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:59.982539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:00.024752 1076050 cri.go:89] found id: ""
	I0127 15:41:00.024794 1076050 logs.go:282] 0 containers: []
	W0127 15:41:00.024806 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:00.024820 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:00.024839 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:00.044330 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:00.044369 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:00.130115 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:00.130216 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:00.130241 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:00.236534 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:00.236585 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:00.312265 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:00.312307 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:02.873155 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:02.889623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:02.889689 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:02.931491 1076050 cri.go:89] found id: ""
	I0127 15:41:02.931528 1076050 logs.go:282] 0 containers: []
	W0127 15:41:02.931537 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:02.931546 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:02.931615 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:02.968872 1076050 cri.go:89] found id: ""
	I0127 15:41:02.968912 1076050 logs.go:282] 0 containers: []
	W0127 15:41:02.968924 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:02.968932 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:02.969030 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:03.004397 1076050 cri.go:89] found id: ""
	I0127 15:41:03.004428 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.004437 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:03.004443 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:03.004498 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:03.042909 1076050 cri.go:89] found id: ""
	I0127 15:41:03.042937 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.042948 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:03.042955 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:03.043020 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:03.081525 1076050 cri.go:89] found id: ""
	I0127 15:41:03.081556 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.081567 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:03.081576 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:03.081645 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:03.122741 1076050 cri.go:89] found id: ""
	I0127 15:41:03.122773 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.122784 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:03.122793 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:03.122855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:03.159043 1076050 cri.go:89] found id: ""
	I0127 15:41:03.159069 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.159077 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:03.159090 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:03.159140 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:03.200367 1076050 cri.go:89] found id: ""
	I0127 15:41:03.200402 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.200414 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:03.200429 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:03.200447 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:03.291239 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:03.291291 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:03.336057 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:03.336098 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:03.395428 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:03.395480 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:03.411878 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:03.411911 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 15:41:02.881961 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:03.381153 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:03.881177 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:04.381381 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:04.881601 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.381394 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.881197 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.963844 1075160 kubeadm.go:1113] duration metric: took 3.831201657s to wait for elevateKubeSystemPrivileges
	I0127 15:41:05.963884 1075160 kubeadm.go:394] duration metric: took 5m3.006407652s to StartCluster
	I0127 15:41:05.963905 1075160 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:41:05.964014 1075160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:41:05.966708 1075160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:41:05.967090 1075160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.160 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:41:05.967165 1075160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:41:05.967282 1075160 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967302 1075160 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.967308 1075160 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:41:05.967326 1075160 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967343 1075160 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967355 1075160 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-912913"
	I0127 15:41:05.967358 1075160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-912913"
	I0127 15:41:05.967357 1075160 config.go:182] Loaded profile config "default-k8s-diff-port-912913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:41:05.967356 1075160 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967381 1075160 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.967390 1075160 addons.go:247] addon dashboard should already be in state true
	W0127 15:41:05.967362 1075160 addons.go:247] addon metrics-server should already be in state true
	I0127 15:41:05.967334 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967433 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967433 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967803 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967829 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967842 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967854 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967866 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967894 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967857 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967954 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.968953 1075160 out.go:177] * Verifying Kubernetes components...
	I0127 15:41:05.970726 1075160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:41:05.986076 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0127 15:41:05.986613 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.987340 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.987367 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.987696 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0127 15:41:05.987879 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0127 15:41:05.987883 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0127 15:41:05.987924 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.988072 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988235 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988485 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988597 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.988641 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.988725 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.988745 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.988760 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.988775 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.989142 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.989164 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.989172 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989192 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989534 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989721 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:05.989770 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.989789 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.989815 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.989827 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.993646 1075160 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.993672 1075160 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:41:05.993703 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.994089 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.994137 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:06.007391 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I0127 15:41:06.007784 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0127 15:41:06.008229 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.008327 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.008859 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.008880 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.008951 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0127 15:41:06.009182 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.009201 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.009660 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.009740 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.009876 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.010328 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.010393 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.010588 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.010748 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.010833 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.025187 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.025199 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.025187 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.025186 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0127 15:41:06.037186 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.037801 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.038419 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.038439 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.038833 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.039733 1075160 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:41:06.039865 1075160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:41:06.039911 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:06.039947 1075160 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:41:06.039975 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:06.041831 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:41:06.041853 1075160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:41:06.041887 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.042817 1075160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:41:06.042833 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:41:06.042854 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.045474 1075160 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:41:06.047233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.047253 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:41:06.047270 1075160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:41:06.047294 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.047965 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.048037 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.048421 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.048675 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.049034 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.049616 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.051299 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.051321 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.051717 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.051739 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.052033 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.052054 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.052088 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.052323 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.052372 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.052526 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.052702 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.057244 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.057489 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.057880 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.058959 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39803
	I0127 15:41:06.059421 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.059854 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.059866 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.060259 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.060421 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.062233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.062753 1075160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:41:06.062767 1075160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:41:06.062781 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.067605 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.068014 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.068027 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.068243 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.068368 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.068559 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.068695 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.211887 1075160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:41:06.257549 1075160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-912913" to be "Ready" ...
	I0127 15:41:06.305423 1075160 node_ready.go:49] node "default-k8s-diff-port-912913" has status "Ready":"True"
	I0127 15:41:06.305459 1075160 node_ready.go:38] duration metric: took 47.864404ms for node "default-k8s-diff-port-912913" to be "Ready" ...
	I0127 15:41:06.305474 1075160 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:41:06.311746 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:41:06.311780 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:41:06.329198 1075160 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:06.374086 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:41:06.374119 1075160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:41:06.377742 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:41:06.377771 1075160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:41:06.400332 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:41:06.403004 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:41:06.430195 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:41:06.430217 1075160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:41:06.487574 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:41:06.487605 1075160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:41:06.529999 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:41:06.530054 1075160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:41:06.609758 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:41:06.619520 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:41:06.619567 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:41:06.795826 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:41:06.795870 1075160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:41:06.889910 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:41:06.889940 1075160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:41:06.979355 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:41:06.979391 1075160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:41:07.053404 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:41:07.053438 1075160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:41:07.101199 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:41:07.101235 1075160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:41:07.165859 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:41:07.419725 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016680012s)
	I0127 15:41:07.419820 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.419839 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.419841 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.019463574s)
	I0127 15:41:07.419916 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.419939 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420292 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420306 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420322 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420352 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.420365 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420366 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420492 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420521 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420530 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.420538 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420775 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420779 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420786 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420814 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420842 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420849 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.438640 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.438681 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.439056 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.439081 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.439091 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	W0127 15:41:03.498183 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:06.000178 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:06.024915 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:06.024973 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:06.098332 1076050 cri.go:89] found id: ""
	I0127 15:41:06.098361 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.098369 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:06.098375 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:06.098430 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:06.156082 1076050 cri.go:89] found id: ""
	I0127 15:41:06.156117 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.156129 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:06.156137 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:06.156203 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:06.217204 1076050 cri.go:89] found id: ""
	I0127 15:41:06.217235 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.217246 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:06.217255 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:06.217331 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:06.259003 1076050 cri.go:89] found id: ""
	I0127 15:41:06.259029 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.259041 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:06.259048 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:06.259123 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:06.298292 1076050 cri.go:89] found id: ""
	I0127 15:41:06.298330 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.298341 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:06.298349 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:06.298416 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:06.339173 1076050 cri.go:89] found id: ""
	I0127 15:41:06.339211 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.339224 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:06.339234 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:06.339309 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:06.381271 1076050 cri.go:89] found id: ""
	I0127 15:41:06.381300 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.381311 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:06.381320 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:06.381385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:06.429073 1076050 cri.go:89] found id: ""
	I0127 15:41:06.429134 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.429149 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:06.429164 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:06.429187 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:06.491509 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:06.491545 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:06.507964 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:06.508011 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:06.589122 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:06.589158 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:06.589173 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:06.668992 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:06.669051 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:07.791715 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.18189835s)
	I0127 15:41:07.791796 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.791813 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.792148 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.792170 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.792181 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.792190 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.792522 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.792570 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.792580 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.792591 1075160 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-912913"
	I0127 15:41:08.375027 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:08.535318 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.369395363s)
	I0127 15:41:08.535382 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:08.535398 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:08.535779 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:08.535833 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:08.535847 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:08.535857 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:08.536129 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:08.536152 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:08.537800 1075160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-912913 addons enable metrics-server
	
	I0127 15:41:08.539323 1075160 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 15:41:08.540713 1075160 addons.go:514] duration metric: took 2.57355558s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 15:41:10.869256 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:09.224594 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:09.239525 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:09.239616 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:09.285116 1076050 cri.go:89] found id: ""
	I0127 15:41:09.285160 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.285172 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:09.285182 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:09.285252 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:09.342278 1076050 cri.go:89] found id: ""
	I0127 15:41:09.342307 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.342323 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:09.342332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:09.342397 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:09.385479 1076050 cri.go:89] found id: ""
	I0127 15:41:09.385506 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.385515 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:09.385521 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:09.385580 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:09.426386 1076050 cri.go:89] found id: ""
	I0127 15:41:09.426426 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.426439 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:09.426448 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:09.426516 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:09.468739 1076050 cri.go:89] found id: ""
	I0127 15:41:09.468776 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.468789 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:09.468798 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:09.468866 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:09.510885 1076050 cri.go:89] found id: ""
	I0127 15:41:09.510918 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.510931 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:09.510939 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:09.511007 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:09.548406 1076050 cri.go:89] found id: ""
	I0127 15:41:09.548442 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.548455 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:09.548464 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:09.548547 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:09.589727 1076050 cri.go:89] found id: ""
	I0127 15:41:09.589761 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.589773 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:09.589786 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:09.589802 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:09.641717 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:09.641759 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:09.712152 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:09.712220 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:09.730069 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:09.730119 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:09.808412 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:09.808447 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:09.808462 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:12.421654 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:12.440156 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:12.440298 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:12.489759 1076050 cri.go:89] found id: ""
	I0127 15:41:12.489788 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.489800 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:12.489809 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:12.489887 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:12.540068 1076050 cri.go:89] found id: ""
	I0127 15:41:12.540099 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.540108 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:12.540114 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:12.540178 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:12.587471 1076050 cri.go:89] found id: ""
	I0127 15:41:12.587497 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.587505 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:12.587511 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:12.587578 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:12.638634 1076050 cri.go:89] found id: ""
	I0127 15:41:12.638668 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.638680 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:12.638689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:12.638762 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:12.683784 1076050 cri.go:89] found id: ""
	I0127 15:41:12.683815 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.683826 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:12.683837 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:12.683900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:12.720438 1076050 cri.go:89] found id: ""
	I0127 15:41:12.720479 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.720488 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:12.720495 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:12.720548 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:12.759175 1076050 cri.go:89] found id: ""
	I0127 15:41:12.759207 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.759219 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:12.759226 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:12.759290 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:12.792624 1076050 cri.go:89] found id: ""
	I0127 15:41:12.792656 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.792668 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:12.792681 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:12.792697 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:12.878341 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:12.878386 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:12.926986 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:12.927028 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:12.982133 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:12.982172 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:12.999460 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:12.999503 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:13.087892 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:13.336050 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:15.338501 1075160 pod_ready.go:93] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.338533 1075160 pod_ready.go:82] duration metric: took 9.009294324s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.338546 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.343866 1075160 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.343889 1075160 pod_ready.go:82] duration metric: took 5.336104ms for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.343898 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.349389 1075160 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.349413 1075160 pod_ready.go:82] duration metric: took 5.508752ms for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.349422 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.355144 1075160 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.355166 1075160 pod_ready.go:82] duration metric: took 5.737289ms for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.355173 1075160 pod_ready.go:39] duration metric: took 9.049686447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:41:15.355191 1075160 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:41:15.355243 1075160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:15.370942 1075160 api_server.go:72] duration metric: took 9.403809848s to wait for apiserver process to appear ...
	I0127 15:41:15.370967 1075160 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:41:15.370986 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:41:15.378733 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 200:
	ok
	I0127 15:41:15.380614 1075160 api_server.go:141] control plane version: v1.32.1
	I0127 15:41:15.380640 1075160 api_server.go:131] duration metric: took 9.666454ms to wait for apiserver health ...
	I0127 15:41:15.380649 1075160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:41:15.390107 1075160 system_pods.go:59] 9 kube-system pods found
	I0127 15:41:15.390141 1075160 system_pods.go:61] "coredns-668d6bf9bc-8rzrt" [92e346ae-cc28-4f80-9424-c4d97ac8106c] Running
	I0127 15:41:15.390147 1075160 system_pods.go:61] "coredns-668d6bf9bc-zw9rm" [c29a853d-5146-4641-a434-d85147dc3b16] Running
	I0127 15:41:15.390151 1075160 system_pods.go:61] "etcd-default-k8s-diff-port-912913" [4eb15463-b135-4347-9c0b-ff5cd9fa0991] Running
	I0127 15:41:15.390155 1075160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-912913" [f1d151d9-bd66-41f1-b2e8-bb495f8a3522] Running
	I0127 15:41:15.390159 1075160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-912913" [da81a47f-a89e-4daa-828c-e1dc1458067c] Running
	I0127 15:41:15.390161 1075160 system_pods.go:61] "kube-proxy-k85rn" [8da8dc48-3019-4fa6-b5c4-58b0b41aefc0] Running
	I0127 15:41:15.390165 1075160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-912913" [9042c262-515d-40d9-9d99-fda8f49b141a] Running
	I0127 15:41:15.390170 1075160 system_pods.go:61] "metrics-server-f79f97bbb-rtx6b" [aed61473-0cc8-4459-9153-5c42e5a10b2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:41:15.390174 1075160 system_pods.go:61] "storage-provisioner" [5fa7b229-cd7d-4aa4-9cee-26a1c5714b3c] Running
	I0127 15:41:15.390184 1075160 system_pods.go:74] duration metric: took 9.526361ms to wait for pod list to return data ...
	I0127 15:41:15.390193 1075160 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:41:15.394345 1075160 default_sa.go:45] found service account: "default"
	I0127 15:41:15.394371 1075160 default_sa.go:55] duration metric: took 4.169137ms for default service account to be created ...
	I0127 15:41:15.394380 1075160 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:41:15.537654 1075160 system_pods.go:87] 9 kube-system pods found
	I0127 15:41:15.589166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:15.607749 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:15.607824 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:15.655722 1076050 cri.go:89] found id: ""
	I0127 15:41:15.655752 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.655764 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:15.655773 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:15.655847 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:15.703202 1076050 cri.go:89] found id: ""
	I0127 15:41:15.703235 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.703248 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:15.703256 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:15.703360 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:15.747335 1076050 cri.go:89] found id: ""
	I0127 15:41:15.747371 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.747383 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:15.747400 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:15.747470 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:15.786207 1076050 cri.go:89] found id: ""
	I0127 15:41:15.786245 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.786259 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:15.786269 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:15.786351 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:15.826251 1076050 cri.go:89] found id: ""
	I0127 15:41:15.826286 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.826298 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:15.826306 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:15.826435 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:15.873134 1076050 cri.go:89] found id: ""
	I0127 15:41:15.873167 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.873187 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:15.873195 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:15.873267 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:15.923221 1076050 cri.go:89] found id: ""
	I0127 15:41:15.923273 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.923286 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:15.923294 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:15.923364 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:15.967245 1076050 cri.go:89] found id: ""
	I0127 15:41:15.967282 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.967295 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:15.967309 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:15.967325 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:16.057675 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:16.057706 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:16.057722 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:16.141133 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:16.141181 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:16.186832 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:16.186869 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:16.255430 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:16.255473 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:18.774206 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:18.792191 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:18.792258 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:18.835636 1076050 cri.go:89] found id: ""
	I0127 15:41:18.835674 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.835685 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:18.835693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:18.835763 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:18.875370 1076050 cri.go:89] found id: ""
	I0127 15:41:18.875423 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.875435 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:18.875444 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:18.875517 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:18.915439 1076050 cri.go:89] found id: ""
	I0127 15:41:18.915469 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.915480 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:18.915489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:18.915554 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:18.962331 1076050 cri.go:89] found id: ""
	I0127 15:41:18.962359 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.962366 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:18.962372 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:18.962425 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:19.017809 1076050 cri.go:89] found id: ""
	I0127 15:41:19.017839 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.017849 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:19.017857 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:19.017924 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:19.066418 1076050 cri.go:89] found id: ""
	I0127 15:41:19.066454 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.066463 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:19.066469 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:19.066540 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:19.107181 1076050 cri.go:89] found id: ""
	I0127 15:41:19.107212 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.107221 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:19.107227 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:19.107286 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:19.148999 1076050 cri.go:89] found id: ""
	I0127 15:41:19.149043 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.149055 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:19.149070 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:19.149093 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:19.235472 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:19.235514 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:19.290762 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:19.290794 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:19.349155 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:19.349201 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:19.365924 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:19.365957 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:19.455480 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:21.957147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:21.971580 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:21.971732 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:22.011493 1076050 cri.go:89] found id: ""
	I0127 15:41:22.011523 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.011531 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:22.011537 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:22.011600 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:22.047592 1076050 cri.go:89] found id: ""
	I0127 15:41:22.047615 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.047623 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:22.047635 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:22.047704 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:22.084231 1076050 cri.go:89] found id: ""
	I0127 15:41:22.084258 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.084266 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:22.084272 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:22.084331 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:22.126843 1076050 cri.go:89] found id: ""
	I0127 15:41:22.126870 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.126881 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:22.126890 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:22.126952 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:22.167538 1076050 cri.go:89] found id: ""
	I0127 15:41:22.167563 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.167572 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:22.167579 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:22.167633 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:22.206138 1076050 cri.go:89] found id: ""
	I0127 15:41:22.206169 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.206180 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:22.206193 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:22.206259 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:22.245152 1076050 cri.go:89] found id: ""
	I0127 15:41:22.245186 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.245199 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:22.245207 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:22.245273 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:22.280780 1076050 cri.go:89] found id: ""
	I0127 15:41:22.280820 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.280831 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:22.280844 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:22.280859 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:22.333940 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:22.333975 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:22.348880 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:22.348910 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:22.421581 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:22.421610 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:22.421625 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:22.502157 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:22.502199 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:25.045123 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:25.058997 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:25.059058 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:25.094852 1076050 cri.go:89] found id: ""
	I0127 15:41:25.094881 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.094888 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:25.094896 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:25.094955 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:25.136390 1076050 cri.go:89] found id: ""
	I0127 15:41:25.136414 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.136424 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:25.136432 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:25.136491 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:25.173187 1076050 cri.go:89] found id: ""
	I0127 15:41:25.173213 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.173221 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:25.173226 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:25.173284 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:25.210946 1076050 cri.go:89] found id: ""
	I0127 15:41:25.210977 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.210990 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:25.210999 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:25.211082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:25.251607 1076050 cri.go:89] found id: ""
	I0127 15:41:25.251633 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.251643 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:25.251649 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:25.251702 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:25.286803 1076050 cri.go:89] found id: ""
	I0127 15:41:25.286831 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.286842 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:25.286849 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:25.286914 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:25.322818 1076050 cri.go:89] found id: ""
	I0127 15:41:25.322846 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.322857 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:25.322866 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:25.322936 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:25.361082 1076050 cri.go:89] found id: ""
	I0127 15:41:25.361110 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.361120 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:25.361130 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:25.361142 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:25.412378 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:25.412416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:25.427170 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:25.427206 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:25.498342 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:25.498377 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:25.498393 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:25.589099 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:25.589152 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:28.130224 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:28.145326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:28.145389 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:28.186258 1076050 cri.go:89] found id: ""
	I0127 15:41:28.186293 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.186316 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:28.186326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:28.186408 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:28.224332 1076050 cri.go:89] found id: ""
	I0127 15:41:28.224370 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.224382 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:28.224393 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:28.224462 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:28.262236 1076050 cri.go:89] found id: ""
	I0127 15:41:28.262267 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.262274 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:28.262282 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:28.262334 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:28.299248 1076050 cri.go:89] found id: ""
	I0127 15:41:28.299281 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.299290 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:28.299300 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:28.299358 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:28.340255 1076050 cri.go:89] found id: ""
	I0127 15:41:28.340289 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.340301 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:28.340326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:28.340396 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:28.384857 1076050 cri.go:89] found id: ""
	I0127 15:41:28.384891 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.384903 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:28.384912 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:28.384983 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:28.428121 1076050 cri.go:89] found id: ""
	I0127 15:41:28.428158 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.428169 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:28.428179 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:28.428248 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:28.473305 1076050 cri.go:89] found id: ""
	I0127 15:41:28.473332 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.473340 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:28.473350 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:28.473368 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:28.571238 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:28.571271 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:28.571316 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:28.651696 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:28.651731 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:28.692842 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:28.692870 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:28.748091 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:28.748133 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:31.262275 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:31.278085 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:31.278174 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:31.313339 1076050 cri.go:89] found id: ""
	I0127 15:41:31.313366 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.313375 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:31.313381 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:31.313450 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:31.351690 1076050 cri.go:89] found id: ""
	I0127 15:41:31.351716 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.351726 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:31.351732 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:31.351797 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:31.387516 1076050 cri.go:89] found id: ""
	I0127 15:41:31.387547 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.387556 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:31.387562 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:31.387617 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:31.422030 1076050 cri.go:89] found id: ""
	I0127 15:41:31.422062 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.422070 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:31.422076 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:31.422134 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:31.458563 1076050 cri.go:89] found id: ""
	I0127 15:41:31.458592 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.458604 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:31.458612 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:31.458679 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:31.496029 1076050 cri.go:89] found id: ""
	I0127 15:41:31.496064 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.496075 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:31.496090 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:31.496156 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:31.543782 1076050 cri.go:89] found id: ""
	I0127 15:41:31.543808 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.543816 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:31.543822 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:31.543874 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:31.581950 1076050 cri.go:89] found id: ""
	I0127 15:41:31.581987 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.582001 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:31.582014 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:31.582032 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:31.653329 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:31.653358 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:31.653374 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:31.736286 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:31.736323 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:31.782977 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:31.783009 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:31.842741 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:31.842773 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:34.357158 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:34.370137 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:34.370204 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:34.414297 1076050 cri.go:89] found id: ""
	I0127 15:41:34.414334 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.414347 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:34.414356 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:34.414437 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:34.450717 1076050 cri.go:89] found id: ""
	I0127 15:41:34.450749 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.450759 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:34.450767 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:34.450832 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:34.490881 1076050 cri.go:89] found id: ""
	I0127 15:41:34.490915 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.490928 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:34.490937 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:34.491012 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:34.526240 1076050 cri.go:89] found id: ""
	I0127 15:41:34.526277 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.526289 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:34.526297 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:34.526365 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:34.562664 1076050 cri.go:89] found id: ""
	I0127 15:41:34.562700 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.562712 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:34.562721 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:34.562788 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:34.600382 1076050 cri.go:89] found id: ""
	I0127 15:41:34.600411 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.600422 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:34.600430 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:34.600496 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:34.636399 1076050 cri.go:89] found id: ""
	I0127 15:41:34.636431 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.636443 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:34.636451 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:34.636518 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:34.676900 1076050 cri.go:89] found id: ""
	I0127 15:41:34.676935 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.676948 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:34.676961 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:34.676975 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:34.730519 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:34.730555 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:34.746159 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:34.746188 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:34.823410 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:34.823447 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:34.823468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:34.907572 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:34.907611 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:37.485412 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:37.499659 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:37.499761 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:37.536578 1076050 cri.go:89] found id: ""
	I0127 15:41:37.536608 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.536618 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:37.536627 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:37.536703 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:37.573737 1076050 cri.go:89] found id: ""
	I0127 15:41:37.573773 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.573783 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:37.573790 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:37.573861 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:37.611200 1076050 cri.go:89] found id: ""
	I0127 15:41:37.611232 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.611241 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:37.611248 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:37.611302 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:37.646784 1076050 cri.go:89] found id: ""
	I0127 15:41:37.646812 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.646823 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:37.646832 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:37.646900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:37.684664 1076050 cri.go:89] found id: ""
	I0127 15:41:37.684694 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.684706 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:37.684714 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:37.684777 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:37.721812 1076050 cri.go:89] found id: ""
	I0127 15:41:37.721850 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.721863 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:37.721874 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:37.721944 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:37.759256 1076050 cri.go:89] found id: ""
	I0127 15:41:37.759279 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.759287 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:37.759293 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:37.759345 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:37.798971 1076050 cri.go:89] found id: ""
	I0127 15:41:37.799004 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.799017 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:37.799030 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:37.799041 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:37.855679 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:37.855719 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:37.869799 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:37.869833 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:37.943918 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:37.943944 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:37.943956 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:38.035563 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:38.035611 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:40.581178 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:40.597341 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:40.597409 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:40.634799 1076050 cri.go:89] found id: ""
	I0127 15:41:40.634827 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.634836 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:40.634843 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:40.634910 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:40.684392 1076050 cri.go:89] found id: ""
	I0127 15:41:40.684421 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.684429 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:40.684437 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:40.684504 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:40.729085 1076050 cri.go:89] found id: ""
	I0127 15:41:40.729120 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.729131 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:40.729139 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:40.729212 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:40.778437 1076050 cri.go:89] found id: ""
	I0127 15:41:40.778469 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.778482 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:40.778489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:40.778556 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:40.820889 1076050 cri.go:89] found id: ""
	I0127 15:41:40.820914 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.820922 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:40.820928 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:40.820992 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:40.858256 1076050 cri.go:89] found id: ""
	I0127 15:41:40.858284 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.858296 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:40.858304 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:40.858374 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:40.897931 1076050 cri.go:89] found id: ""
	I0127 15:41:40.897957 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.897966 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:40.897972 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:40.898026 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:40.937068 1076050 cri.go:89] found id: ""
	I0127 15:41:40.937100 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.937111 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:40.937124 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:40.937138 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:41.012844 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:41.012867 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:41.012880 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:41.093680 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:41.093722 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:41.136964 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:41.136996 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:41.190396 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:41.190435 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:43.708328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:43.722838 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:43.722928 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:43.762360 1076050 cri.go:89] found id: ""
	I0127 15:41:43.762395 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.762407 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:43.762416 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:43.762483 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:43.802226 1076050 cri.go:89] found id: ""
	I0127 15:41:43.802266 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.802279 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:43.802287 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:43.802363 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:43.848037 1076050 cri.go:89] found id: ""
	I0127 15:41:43.848067 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.848081 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:43.848100 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:43.848167 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:43.891393 1076050 cri.go:89] found id: ""
	I0127 15:41:43.891491 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.891506 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:43.891516 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:43.891585 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:43.936352 1076050 cri.go:89] found id: ""
	I0127 15:41:43.936447 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.936467 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:43.936481 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:43.936632 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:43.980165 1076050 cri.go:89] found id: ""
	I0127 15:41:43.980192 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.980200 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:43.980206 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:43.980264 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:44.019889 1076050 cri.go:89] found id: ""
	I0127 15:41:44.019925 1076050 logs.go:282] 0 containers: []
	W0127 15:41:44.019938 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:44.019946 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:44.020005 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:44.057363 1076050 cri.go:89] found id: ""
	I0127 15:41:44.057400 1076050 logs.go:282] 0 containers: []
	W0127 15:41:44.057412 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:44.057426 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:44.057442 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:44.072218 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:44.072249 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:44.148918 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:44.148944 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:44.148960 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:44.231300 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:44.231347 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:44.273468 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:44.273507 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:46.833142 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:46.848106 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:46.848174 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:46.886223 1076050 cri.go:89] found id: ""
	I0127 15:41:46.886250 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.886258 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:46.886264 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:46.886315 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:46.923854 1076050 cri.go:89] found id: ""
	I0127 15:41:46.923883 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.923891 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:46.923903 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:46.923956 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:46.962084 1076050 cri.go:89] found id: ""
	I0127 15:41:46.962112 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.962120 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:46.962128 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:46.962189 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:46.998299 1076050 cri.go:89] found id: ""
	I0127 15:41:46.998329 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.998338 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:46.998344 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:46.998401 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:47.036481 1076050 cri.go:89] found id: ""
	I0127 15:41:47.036519 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.036531 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:47.036540 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:47.036606 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:47.072486 1076050 cri.go:89] found id: ""
	I0127 15:41:47.072522 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.072534 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:47.072543 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:47.072610 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:47.116871 1076050 cri.go:89] found id: ""
	I0127 15:41:47.116912 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.116937 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:47.116947 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:47.117049 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:47.157060 1076050 cri.go:89] found id: ""
	I0127 15:41:47.157092 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.157104 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:47.157118 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:47.157135 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:47.210998 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:47.211040 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:47.224898 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:47.224926 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:47.306490 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:47.306521 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:47.306540 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:47.394529 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:47.394582 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:49.942182 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:49.958258 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:49.958321 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:49.997962 1076050 cri.go:89] found id: ""
	I0127 15:41:49.997999 1076050 logs.go:282] 0 containers: []
	W0127 15:41:49.998019 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:49.998029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:49.998091 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:50.042973 1076050 cri.go:89] found id: ""
	I0127 15:41:50.043007 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.043015 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:50.043021 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:50.043078 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:50.080466 1076050 cri.go:89] found id: ""
	I0127 15:41:50.080496 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.080506 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:50.080514 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:50.080581 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:50.122155 1076050 cri.go:89] found id: ""
	I0127 15:41:50.122187 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.122199 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:50.122208 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:50.122270 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:50.160215 1076050 cri.go:89] found id: ""
	I0127 15:41:50.160245 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.160254 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:50.160262 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:50.160315 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:50.200684 1076050 cri.go:89] found id: ""
	I0127 15:41:50.200710 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.200719 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:50.200724 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:50.200790 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:50.238625 1076050 cri.go:89] found id: ""
	I0127 15:41:50.238650 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.238658 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:50.238664 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:50.238721 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:50.276187 1076050 cri.go:89] found id: ""
	I0127 15:41:50.276217 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.276227 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:50.276238 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:50.276258 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:50.327617 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:50.327675 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:50.343530 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:50.343561 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:50.420740 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:50.420764 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:50.420776 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:50.506757 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:50.506809 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:53.057745 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:53.073259 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:53.073338 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:53.111798 1076050 cri.go:89] found id: ""
	I0127 15:41:53.111831 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.111839 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:53.111849 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:53.111921 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:53.151928 1076050 cri.go:89] found id: ""
	I0127 15:41:53.151959 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.151970 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:53.151978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:53.152045 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:53.187310 1076050 cri.go:89] found id: ""
	I0127 15:41:53.187357 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.187369 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:53.187377 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:53.187443 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:53.230758 1076050 cri.go:89] found id: ""
	I0127 15:41:53.230786 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.230795 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:53.230800 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:53.230852 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:53.266244 1076050 cri.go:89] found id: ""
	I0127 15:41:53.266276 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.266285 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:53.266291 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:53.266356 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:53.302601 1076050 cri.go:89] found id: ""
	I0127 15:41:53.302628 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.302638 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:53.302647 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:53.302710 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:53.342505 1076050 cri.go:89] found id: ""
	I0127 15:41:53.342541 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.342551 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:53.342561 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:53.342643 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:53.379672 1076050 cri.go:89] found id: ""
	I0127 15:41:53.379706 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.379718 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:53.379730 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:53.379745 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:53.421809 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:53.421852 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:53.475330 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:53.475369 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:53.490625 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:53.490652 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:53.560602 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:53.560627 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:53.560637 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:56.148600 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:56.162485 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:56.162564 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:56.200397 1076050 cri.go:89] found id: ""
	I0127 15:41:56.200434 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.200447 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:56.200458 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:56.200523 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:56.236022 1076050 cri.go:89] found id: ""
	I0127 15:41:56.236067 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.236078 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:56.236086 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:56.236154 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:56.275920 1076050 cri.go:89] found id: ""
	I0127 15:41:56.275956 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.275966 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:56.275975 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:56.276046 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:56.312921 1076050 cri.go:89] found id: ""
	I0127 15:41:56.312953 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.312963 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:56.312971 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:56.313056 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:56.352348 1076050 cri.go:89] found id: ""
	I0127 15:41:56.352373 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.352381 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:56.352387 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:56.352440 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:56.398556 1076050 cri.go:89] found id: ""
	I0127 15:41:56.398591 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.398603 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:56.398617 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:56.398686 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:56.440032 1076050 cri.go:89] found id: ""
	I0127 15:41:56.440063 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.440071 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:56.440078 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:56.440137 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:56.476249 1076050 cri.go:89] found id: ""
	I0127 15:41:56.476280 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.476291 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:56.476305 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:56.476321 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:56.530965 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:56.531017 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:56.545838 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:56.545869 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:56.618187 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:56.618245 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:56.618257 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:56.701048 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:56.701087 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:59.248508 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:59.262851 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:59.262928 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:59.300917 1076050 cri.go:89] found id: ""
	I0127 15:41:59.300947 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.300959 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:59.300967 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:59.301062 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:59.345421 1076050 cri.go:89] found id: ""
	I0127 15:41:59.345452 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.345463 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:59.345471 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:59.345568 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:59.381990 1076050 cri.go:89] found id: ""
	I0127 15:41:59.382025 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.382037 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:59.382046 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:59.382115 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:59.420410 1076050 cri.go:89] found id: ""
	I0127 15:41:59.420456 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.420466 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:59.420472 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:59.420543 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:59.461365 1076050 cri.go:89] found id: ""
	I0127 15:41:59.461391 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.461403 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:59.461412 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:59.461480 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:59.497094 1076050 cri.go:89] found id: ""
	I0127 15:41:59.497122 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.497130 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:59.497136 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:59.497201 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:59.537636 1076050 cri.go:89] found id: ""
	I0127 15:41:59.537663 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.537672 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:59.537680 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:59.537780 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:59.572954 1076050 cri.go:89] found id: ""
	I0127 15:41:59.572984 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.572993 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:59.573023 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:59.573039 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:59.660416 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:59.660457 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:59.702396 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:59.702423 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:59.758534 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:59.758583 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:59.772463 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:59.772496 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:59.849599 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:02.350500 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:02.364408 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:02.364483 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:02.400537 1076050 cri.go:89] found id: ""
	I0127 15:42:02.400574 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.400588 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:02.400596 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:02.400664 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:02.442696 1076050 cri.go:89] found id: ""
	I0127 15:42:02.442731 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.442743 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:02.442751 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:02.442825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:02.485485 1076050 cri.go:89] found id: ""
	I0127 15:42:02.485511 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.485522 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:02.485529 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:02.485595 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:02.524989 1076050 cri.go:89] found id: ""
	I0127 15:42:02.525036 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.525048 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:02.525057 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:02.525137 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:02.560538 1076050 cri.go:89] found id: ""
	I0127 15:42:02.560567 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.560578 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:02.560586 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:02.560649 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:02.602960 1076050 cri.go:89] found id: ""
	I0127 15:42:02.602996 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.603008 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:02.603017 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:02.603082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:02.645389 1076050 cri.go:89] found id: ""
	I0127 15:42:02.645415 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.645425 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:02.645436 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:02.645502 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:02.689493 1076050 cri.go:89] found id: ""
	I0127 15:42:02.689526 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.689537 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:02.689549 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:02.689578 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:02.746806 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:02.746848 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:02.761212 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:02.761243 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:02.841116 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:02.841135 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:02.841147 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:02.932117 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:02.932159 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:05.477139 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:05.491255 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:05.491337 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:05.527520 1076050 cri.go:89] found id: ""
	I0127 15:42:05.527551 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.527563 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:05.527572 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:05.527639 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:05.569699 1076050 cri.go:89] found id: ""
	I0127 15:42:05.569731 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.569743 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:05.569752 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:05.569825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:05.607615 1076050 cri.go:89] found id: ""
	I0127 15:42:05.607654 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.607667 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:05.607677 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:05.607750 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:05.644591 1076050 cri.go:89] found id: ""
	I0127 15:42:05.644622 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.644634 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:05.644642 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:05.644693 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:05.684235 1076050 cri.go:89] found id: ""
	I0127 15:42:05.684258 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.684265 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:05.684272 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:05.684327 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:05.722858 1076050 cri.go:89] found id: ""
	I0127 15:42:05.722902 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.722914 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:05.722924 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:05.722989 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:05.759028 1076050 cri.go:89] found id: ""
	I0127 15:42:05.759062 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.759074 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:05.759082 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:05.759203 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:05.799551 1076050 cri.go:89] found id: ""
	I0127 15:42:05.799580 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.799592 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:05.799608 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:05.799624 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:05.859709 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:05.859763 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:05.873857 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:05.873893 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:05.950048 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:05.950080 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:05.950097 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:06.027916 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:06.027961 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:08.576361 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:08.591092 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:08.591172 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:08.629233 1076050 cri.go:89] found id: ""
	I0127 15:42:08.629262 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.629271 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:08.629277 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:08.629330 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:08.664138 1076050 cri.go:89] found id: ""
	I0127 15:42:08.664172 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.664183 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:08.664192 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:08.664254 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:08.702076 1076050 cri.go:89] found id: ""
	I0127 15:42:08.702113 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.702124 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:08.702132 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:08.702195 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:08.738780 1076050 cri.go:89] found id: ""
	I0127 15:42:08.738813 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.738823 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:08.738831 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:08.738904 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:08.773890 1076050 cri.go:89] found id: ""
	I0127 15:42:08.773922 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.773930 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:08.773936 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:08.773987 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:08.808430 1076050 cri.go:89] found id: ""
	I0127 15:42:08.808465 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.808477 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:08.808485 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:08.808553 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:08.844590 1076050 cri.go:89] found id: ""
	I0127 15:42:08.844615 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.844626 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:08.844634 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:08.844701 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:08.888333 1076050 cri.go:89] found id: ""
	I0127 15:42:08.888368 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.888377 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:08.888388 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:08.888420 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:08.941417 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:08.941453 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:08.956868 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:08.956942 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:09.049362 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:09.049390 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:09.049406 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:09.129215 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:09.129255 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:11.675550 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:11.690737 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:11.690808 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:11.727524 1076050 cri.go:89] found id: ""
	I0127 15:42:11.727554 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.727564 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:11.727572 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:11.727635 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:11.764046 1076050 cri.go:89] found id: ""
	I0127 15:42:11.764073 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.764082 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:11.764089 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:11.764142 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:11.799530 1076050 cri.go:89] found id: ""
	I0127 15:42:11.799562 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.799574 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:11.799582 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:11.799647 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:11.839880 1076050 cri.go:89] found id: ""
	I0127 15:42:11.839912 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.839921 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:11.839927 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:11.839989 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:11.876263 1076050 cri.go:89] found id: ""
	I0127 15:42:11.876313 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.876324 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:11.876332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:11.876403 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:11.919106 1076050 cri.go:89] found id: ""
	I0127 15:42:11.919136 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.919144 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:11.919150 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:11.919209 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:11.957253 1076050 cri.go:89] found id: ""
	I0127 15:42:11.957285 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.957296 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:11.957304 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:11.957369 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:11.993481 1076050 cri.go:89] found id: ""
	I0127 15:42:11.993515 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.993527 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:11.993544 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:11.993560 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:12.063236 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:12.063264 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:12.063285 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:12.149889 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:12.149932 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:12.195704 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:12.195730 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:12.254422 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:12.254457 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:14.768483 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:14.782452 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:14.782539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:14.822523 1076050 cri.go:89] found id: ""
	I0127 15:42:14.822558 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.822570 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:14.822576 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:14.822654 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:14.861058 1076050 cri.go:89] found id: ""
	I0127 15:42:14.861085 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.861094 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:14.861099 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:14.861164 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:14.898147 1076050 cri.go:89] found id: ""
	I0127 15:42:14.898178 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.898189 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:14.898199 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:14.898265 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:14.936269 1076050 cri.go:89] found id: ""
	I0127 15:42:14.936299 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.936307 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:14.936313 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:14.936378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:14.971287 1076050 cri.go:89] found id: ""
	I0127 15:42:14.971320 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.971332 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:14.971341 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:14.971394 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:15.011649 1076050 cri.go:89] found id: ""
	I0127 15:42:15.011679 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.011687 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:15.011693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:15.011744 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:15.047290 1076050 cri.go:89] found id: ""
	I0127 15:42:15.047329 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.047340 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:15.047349 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:15.047413 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:15.089625 1076050 cri.go:89] found id: ""
	I0127 15:42:15.089655 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.089667 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:15.089680 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:15.089694 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:15.136374 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:15.136410 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:15.195628 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:15.195676 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:15.213575 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:15.213679 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:15.293664 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:15.293694 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:15.293707 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:17.882520 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:17.896333 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:17.896403 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:17.935049 1076050 cri.go:89] found id: ""
	I0127 15:42:17.935078 1076050 logs.go:282] 0 containers: []
	W0127 15:42:17.935088 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:17.935096 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:17.935158 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:17.972911 1076050 cri.go:89] found id: ""
	I0127 15:42:17.972946 1076050 logs.go:282] 0 containers: []
	W0127 15:42:17.972958 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:17.972967 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:17.973073 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:18.017249 1076050 cri.go:89] found id: ""
	I0127 15:42:18.017276 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.017286 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:18.017292 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:18.017353 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:18.059963 1076050 cri.go:89] found id: ""
	I0127 15:42:18.059995 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.060007 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:18.060016 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:18.060086 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:18.106174 1076050 cri.go:89] found id: ""
	I0127 15:42:18.106219 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.106232 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:18.106248 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:18.106318 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:18.146130 1076050 cri.go:89] found id: ""
	I0127 15:42:18.146161 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.146176 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:18.146184 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:18.146256 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:18.184143 1076050 cri.go:89] found id: ""
	I0127 15:42:18.184176 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.184185 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:18.184191 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:18.184246 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:18.225042 1076050 cri.go:89] found id: ""
	I0127 15:42:18.225084 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.225096 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:18.225110 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:18.225127 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:18.263543 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:18.263577 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:18.321274 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:18.321323 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:18.336830 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:18.336861 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:18.420928 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:18.420955 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:18.420971 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:21.014731 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:21.030978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:21.031048 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:21.071340 1076050 cri.go:89] found id: ""
	I0127 15:42:21.071370 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.071378 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:21.071385 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:21.071442 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:21.107955 1076050 cri.go:89] found id: ""
	I0127 15:42:21.107987 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.107999 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:21.108006 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:21.108073 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:21.148426 1076050 cri.go:89] found id: ""
	I0127 15:42:21.148465 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.148477 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:21.148488 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:21.148561 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:21.199228 1076050 cri.go:89] found id: ""
	I0127 15:42:21.199262 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.199273 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:21.199282 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:21.199353 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:21.259122 1076050 cri.go:89] found id: ""
	I0127 15:42:21.259156 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.259167 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:21.259175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:21.259249 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:21.316242 1076050 cri.go:89] found id: ""
	I0127 15:42:21.316288 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.316300 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:21.316309 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:21.316378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:21.360071 1076050 cri.go:89] found id: ""
	I0127 15:42:21.360104 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.360116 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:21.360125 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:21.360190 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:21.405056 1076050 cri.go:89] found id: ""
	I0127 15:42:21.405088 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.405099 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:21.405112 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:21.405129 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:21.419657 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:21.419688 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:21.495931 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:21.495957 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:21.495973 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:21.578029 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:21.578075 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:21.626705 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:21.626742 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:24.180267 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:24.193848 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:24.193927 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:24.232734 1076050 cri.go:89] found id: ""
	I0127 15:42:24.232767 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.232778 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:24.232787 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:24.232855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:24.274373 1076050 cri.go:89] found id: ""
	I0127 15:42:24.274410 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.274421 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:24.274430 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:24.274486 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:24.314420 1076050 cri.go:89] found id: ""
	I0127 15:42:24.314449 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.314459 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:24.314469 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:24.314533 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:24.353247 1076050 cri.go:89] found id: ""
	I0127 15:42:24.353284 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.353302 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:24.353311 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:24.353380 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:24.395518 1076050 cri.go:89] found id: ""
	I0127 15:42:24.395545 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.395556 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:24.395564 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:24.395630 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:24.433954 1076050 cri.go:89] found id: ""
	I0127 15:42:24.433988 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.433999 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:24.434008 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:24.434078 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:24.475406 1076050 cri.go:89] found id: ""
	I0127 15:42:24.475438 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.475451 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:24.475460 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:24.475530 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:24.511024 1076050 cri.go:89] found id: ""
	I0127 15:42:24.511062 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.511074 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:24.511086 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:24.511105 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:24.585723 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:24.585746 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:24.585766 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:24.666956 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:24.666997 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:24.707929 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:24.707953 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:24.761870 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:24.761906 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:27.276721 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:27.292246 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:27.292341 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:27.332682 1076050 cri.go:89] found id: ""
	I0127 15:42:27.332715 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.332725 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:27.332733 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:27.332804 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:27.368942 1076050 cri.go:89] found id: ""
	I0127 15:42:27.368975 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.368988 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:27.368997 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:27.369083 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:27.406074 1076050 cri.go:89] found id: ""
	I0127 15:42:27.406116 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.406133 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:27.406141 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:27.406195 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:27.443019 1076050 cri.go:89] found id: ""
	I0127 15:42:27.443049 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.443061 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:27.443069 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:27.443136 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:27.478322 1076050 cri.go:89] found id: ""
	I0127 15:42:27.478359 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.478370 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:27.478380 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:27.478463 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:27.517749 1076050 cri.go:89] found id: ""
	I0127 15:42:27.517781 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.517793 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:27.517802 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:27.517868 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:27.556151 1076050 cri.go:89] found id: ""
	I0127 15:42:27.556182 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.556191 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:27.556197 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:27.556260 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:27.594607 1076050 cri.go:89] found id: ""
	I0127 15:42:27.594638 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.594646 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:27.594656 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:27.594666 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:27.675142 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:27.675184 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:27.719306 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:27.719341 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:27.771036 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:27.771076 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:27.785422 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:27.785451 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:27.863147 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:30.364006 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:30.378275 1076050 kubeadm.go:597] duration metric: took 4m3.244067669s to restartPrimaryControlPlane
	W0127 15:42:30.378392 1076050 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:42:30.378427 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:42:32.324859 1076050 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.946405854s)
	I0127 15:42:32.324949 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:42:32.342099 1076050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:42:32.353110 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:42:32.365238 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:42:32.365259 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:42:32.365309 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:42:32.376623 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:42:32.376679 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:42:32.387533 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:42:32.397645 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:42:32.397706 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:42:32.409015 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:42:32.420172 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:42:32.420236 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:42:32.430688 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:42:32.441797 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:42:32.441856 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:42:32.452009 1076050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:42:32.678031 1076050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:44:29.249145 1076050 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:44:29.249258 1076050 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:44:29.250830 1076050 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:44:29.250891 1076050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:44:29.251016 1076050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:44:29.251168 1076050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:44:29.251317 1076050 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:44:29.251390 1076050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:44:29.253163 1076050 out.go:235]   - Generating certificates and keys ...
	I0127 15:44:29.253266 1076050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:44:29.253389 1076050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:44:29.253470 1076050 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:44:29.253522 1076050 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:44:29.253581 1076050 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:44:29.253626 1076050 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:44:29.253704 1076050 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:44:29.253772 1076050 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:44:29.253864 1076050 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:44:29.253956 1076050 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:44:29.254008 1076050 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:44:29.254112 1076050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:44:29.254215 1076050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:44:29.254305 1076050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:44:29.254391 1076050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:44:29.254466 1076050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:44:29.254625 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:44:29.254763 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:44:29.254826 1076050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:44:29.254989 1076050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:44:29.256624 1076050 out.go:235]   - Booting up control plane ...
	I0127 15:44:29.256744 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:44:29.256829 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:44:29.256905 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:44:29.257025 1076050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:44:29.257228 1076050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:44:29.257290 1076050 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:44:29.257373 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.257657 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.257767 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.257963 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258031 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258254 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258355 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258591 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258669 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258862 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258871 1076050 kubeadm.go:310] 
	I0127 15:44:29.258904 1076050 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:44:29.258972 1076050 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:44:29.258989 1076050 kubeadm.go:310] 
	I0127 15:44:29.259027 1076050 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:44:29.259057 1076050 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:44:29.259205 1076050 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:44:29.259221 1076050 kubeadm.go:310] 
	I0127 15:44:29.259358 1076050 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:44:29.259391 1076050 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:44:29.259444 1076050 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:44:29.259459 1076050 kubeadm.go:310] 
	I0127 15:44:29.259593 1076050 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:44:29.259701 1076050 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:44:29.259710 1076050 kubeadm.go:310] 
	I0127 15:44:29.259818 1076050 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:44:29.259940 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:44:29.260041 1076050 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:44:29.260150 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:44:29.260179 1076050 kubeadm.go:310] 
	W0127 15:44:29.260362 1076050 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 15:44:29.260421 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:44:29.751111 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:44:29.767368 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:44:29.778471 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:44:29.778498 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:44:29.778554 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:44:29.789258 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:44:29.789331 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:44:29.799796 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:44:29.809761 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:44:29.809824 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:44:29.819822 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:44:29.829277 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:44:29.829350 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:44:29.840607 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:44:29.850589 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:44:29.850656 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:44:29.860352 1076050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:44:29.931615 1076050 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:44:29.931737 1076050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:44:30.090907 1076050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:44:30.091038 1076050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:44:30.091180 1076050 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:44:30.288545 1076050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:44:30.290548 1076050 out.go:235]   - Generating certificates and keys ...
	I0127 15:44:30.290678 1076050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:44:30.290777 1076050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:44:30.290899 1076050 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:44:30.290993 1076050 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:44:30.291119 1076050 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:44:30.291213 1076050 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:44:30.291312 1076050 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:44:30.291399 1076050 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:44:30.291523 1076050 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:44:30.291640 1076050 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:44:30.291718 1076050 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:44:30.291806 1076050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:44:30.471428 1076050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:44:30.705804 1076050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:44:30.959802 1076050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:44:31.149201 1076050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:44:31.173695 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:44:31.174653 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:44:31.174752 1076050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:44:31.342124 1076050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:44:31.344077 1076050 out.go:235]   - Booting up control plane ...
	I0127 15:44:31.344184 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:44:31.348014 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:44:31.349159 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:44:31.349960 1076050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:44:31.352168 1076050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:45:11.354910 1076050 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:45:11.355380 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:11.355582 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:16.356239 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:16.356487 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:26.357276 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:26.357605 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:46.358046 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:46.358293 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:46:26.356549 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:46:26.356813 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:46:26.356830 1076050 kubeadm.go:310] 
	I0127 15:46:26.356897 1076050 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:46:26.356938 1076050 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:46:26.356949 1076050 kubeadm.go:310] 
	I0127 15:46:26.357026 1076050 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:46:26.357106 1076050 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:46:26.357302 1076050 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:46:26.357336 1076050 kubeadm.go:310] 
	I0127 15:46:26.357498 1076050 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:46:26.357548 1076050 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:46:26.357607 1076050 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:46:26.357624 1076050 kubeadm.go:310] 
	I0127 15:46:26.357766 1076050 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:46:26.357862 1076050 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:46:26.357878 1076050 kubeadm.go:310] 
	I0127 15:46:26.358043 1076050 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:46:26.358166 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:46:26.358290 1076050 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:46:26.358368 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:46:26.358379 1076050 kubeadm.go:310] 
	I0127 15:46:26.358971 1076050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:46:26.359102 1076050 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:46:26.359219 1076050 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:46:26.359281 1076050 kubeadm.go:394] duration metric: took 7m59.27977519s to StartCluster
	I0127 15:46:26.359443 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:46:26.359522 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:46:26.408713 1076050 cri.go:89] found id: ""
	I0127 15:46:26.408752 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.408764 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:46:26.408772 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:46:26.408832 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:46:26.449156 1076050 cri.go:89] found id: ""
	I0127 15:46:26.449190 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.449200 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:46:26.449208 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:46:26.449306 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:46:26.487786 1076050 cri.go:89] found id: ""
	I0127 15:46:26.487812 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.487820 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:46:26.487827 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:46:26.487876 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:46:26.546745 1076050 cri.go:89] found id: ""
	I0127 15:46:26.546772 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.546782 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:46:26.546791 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:46:26.546855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:46:26.584262 1076050 cri.go:89] found id: ""
	I0127 15:46:26.584300 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.584308 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:46:26.584316 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:46:26.584385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:46:26.622575 1076050 cri.go:89] found id: ""
	I0127 15:46:26.622608 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.622617 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:46:26.622623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:46:26.622683 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:46:26.660928 1076050 cri.go:89] found id: ""
	I0127 15:46:26.660955 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.660964 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:46:26.660970 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:46:26.661062 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:46:26.698084 1076050 cri.go:89] found id: ""
	I0127 15:46:26.698116 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.698125 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:46:26.698139 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:46:26.698151 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:46:26.742459 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:46:26.742486 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:46:26.797935 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:46:26.797977 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:46:26.814213 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:46:26.814248 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:46:26.903335 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:46:26.903373 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:46:26.903392 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 15:46:27.016392 1076050 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 15:46:27.016470 1076050 out.go:270] * 
	W0127 15:46:27.016547 1076050 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:46:27.016561 1076050 out.go:270] * 
	W0127 15:46:27.017322 1076050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 15:46:27.020682 1076050 out.go:201] 
	W0127 15:46:27.022217 1076050 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:46:27.022269 1076050 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 15:46:27.022288 1076050 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 15:46:27.023966 1076050 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.409590432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737992788409567890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c5dafc1-a316-4fb9-ae93-9f88660d475b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.410102852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0441cd5f-397d-457f-8964-136d8916f4d9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.410157230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0441cd5f-397d-457f-8964-136d8916f4d9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.410189937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0441cd5f-397d-457f-8964-136d8916f4d9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.446452089Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4866724-2b8b-4cfd-a77b-1694f96d12c8 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.446527384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4866724-2b8b-4cfd-a77b-1694f96d12c8 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.447854048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b2700b6-14b7-4fb4-9650-93fcac041461 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.448270614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737992788448245344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b2700b6-14b7-4fb4-9650-93fcac041461 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.448830946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6028170c-1a77-4afe-81dc-736be65fd97b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.448878761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6028170c-1a77-4afe-81dc-736be65fd97b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.448920959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6028170c-1a77-4afe-81dc-736be65fd97b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.486536987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddd95ab1-90f1-44a7-a15f-a91775cf8b6b name=/runtime.v1.RuntimeService/Version
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.486607663Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddd95ab1-90f1-44a7-a15f-a91775cf8b6b name=/runtime.v1.RuntimeService/Version
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.488028521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fccff0f3-9aa8-4d4e-9929-3ff9dfa5d262 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.488475652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737992788488449506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fccff0f3-9aa8-4d4e-9929-3ff9dfa5d262 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.489014829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37ea8809-0170-4d72-8756-1dee27c4e3df name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.489076659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37ea8809-0170-4d72-8756-1dee27c4e3df name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.489116493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=37ea8809-0170-4d72-8756-1dee27c4e3df name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.526024763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a73dddfb-84e4-4040-9c88-0f50bd05709a name=/runtime.v1.RuntimeService/Version
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.526150354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a73dddfb-84e4-4040-9c88-0f50bd05709a name=/runtime.v1.RuntimeService/Version
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.527998911Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0efed41-5daa-4334-aa57-940c92be46fa name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.528613401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737992788528583587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0efed41-5daa-4334-aa57-940c92be46fa name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.529215815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8c46819-20d8-40dd-8ebb-6b788ad0fc20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.529268033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8c46819-20d8-40dd-8ebb-6b788ad0fc20 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:46:28 old-k8s-version-405706 crio[634]: time="2025-01-27 15:46:28.529306725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a8c46819-20d8-40dd-8ebb-6b788ad0fc20 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 15:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054128] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043515] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175374] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.998732] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641220] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.061271] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.065012] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073970] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.202651] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.132479] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.248883] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.567266] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.063012] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.058094] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.932312] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 15:42] systemd-fstab-generator[5003]: Ignoring "noauto" option for root device
	[Jan27 15:44] systemd-fstab-generator[5276]: Ignoring "noauto" option for root device
	[  +0.074147] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:46:28 up 8 min,  0 users,  load average: 0.17, 0.14, 0.08
	Linux old-k8s-version-405706 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001fc120, 0x0, 0x0)
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001b7180)
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]: goroutine 148 [select]:
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0004cf4f0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001fc720, 0x0, 0x0)
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001b7340)
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 15:46:26 old-k8s-version-405706 kubelet[5449]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 27 15:46:26 old-k8s-version-405706 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 15:46:26 old-k8s-version-405706 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 15:46:27 old-k8s-version-405706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 27 15:46:27 old-k8s-version-405706 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 15:46:27 old-k8s-version-405706 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 15:46:27 old-k8s-version-405706 kubelet[5517]: I0127 15:46:27.268761    5517 server.go:416] Version: v1.20.0
	Jan 27 15:46:27 old-k8s-version-405706 kubelet[5517]: I0127 15:46:27.269129    5517 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 15:46:27 old-k8s-version-405706 kubelet[5517]: I0127 15:46:27.272876    5517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 15:46:27 old-k8s-version-405706 kubelet[5517]: W0127 15:46:27.274626    5517 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 15:46:27 old-k8s-version-405706 kubelet[5517]: I0127 15:46:27.274580    5517 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
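The CRI-O excerpt above returns an empty container list and the kubelet excerpt ends with systemd restarting the unit for the twentieth time, so no control-plane container ever came up. A minimal sketch of running the same inspection from the host against this profile, reusing only the commands the log itself recommends (the 'minikube ssh' wrapper and host-side invocation are assumptions; the profile name comes from the log):

	# Check the kubelet unit and its recent journal on the node (commands taken from the kubeadm advice above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-405706 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-405706 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# List all Kubernetes containers known to CRI-O, as the kubeadm output suggests
	out/minikube-linux-amd64 ssh -p old-k8s-version-405706 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a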
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 2 (261.881106ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-405706" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (511.24s)
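The failure above lines up with the report's own suggestion of a kubelet cgroup-driver mismatch on the v1.20.0 profile. A hedged sketch of the suggested retry; the profile name, Kubernetes version, and runtime come from the log, while the VM driver and the exact set of start flags this job uses are assumptions:

	# Retry the profile with the kubelet pinned to the systemd cgroup driver, per the suggestion in the log
	out/minikube-linux-amd64 start -p old-k8s-version-405706 \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# Afterwards, probe the kubelet health endpoint that the kubeadm wait-control-plane check polls
	out/minikube-linux-amd64 ssh -p old-k8s-version-405706 -- curl -sSL http://localhost:10248/healthz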

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[identical warning repeated 17 times in a row]
E0127 15:46:46.261175 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[identical warning repeated 21 times in a row]
E0127 15:47:07.512964 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:47:09.311930 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[identical warning repeated 36 times in a row]
E0127 15:47:45.219868 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[identical warning repeated 32 times in a row]
E0127 15:48:16.948048 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[identical warning repeated 49 times in a row]
E0127 15:49:06.239255 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:49:08.283404 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:49:32.465226 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:50:16.985363 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:50:38.928060 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:50:55.531190 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:51:08.726674 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:51:40.048959 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:51:46.261786 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 15 more times]
E0127 15:52:01.993067 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 4 more times]
E0127 15:52:07.513040 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 24 more times]
E0127 15:52:31.791222 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 12 more times]
E0127 15:52:45.219810 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 23 more times]
E0127 15:53:09.327308 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 7 more times]
E0127 15:53:16.947980 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 12 more times]
E0127 15:53:30.579074 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 35 more times]
E0127 15:54:06.238580 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[previous warning repeated 25 more times]
E0127 15:54:32.465226 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[identical warning repeated 44 more times]
E0127 15:55:16.985211 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[identical warning repeated 11 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
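The long run of "connection refused" warnings above indicates that the apiserver advertised at 192.168.72.49:8443 was unreachable for the entire wait, not that the dashboard pod was merely unready. A quick manual reachability check (hypothetical commands shown only for illustration; they were not part of the test run, and the kubectl context name is assumed to match the profile name) would be:

  curl -k https://192.168.72.49:8443/healthz
  kubectl --context old-k8s-version-405706 get --raw /readyz

Both would be expected to fail with the same "connection refused" error while the control plane is down.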
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 2 (259.513703ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-405706" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
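The readiness condition the harness polls for can be approximated manually with kubectl; a sketch (the context name is assumed to match the profile name, and the timeout mirrors the 9m0s used by the test):

  kubectl --context old-k8s-version-405706 get pods --namespace=kubernetes-dashboard --selector=k8s-app=kubernetes-dashboard
  kubectl --context old-k8s-version-405706 wait --for=condition=ready --namespace=kubernetes-dashboard pod --selector=k8s-app=kubernetes-dashboard --timeout=9m0s

With the apiserver stopped, as the post-mortem below confirms, these commands fail immediately with a connection error rather than timing out on pod readiness.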
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 2 (244.954494ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-405706 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-405706 logs -n 25: (1.149560556s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-230388 sudo cat                              | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo find                             | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo crio                             | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-230388                                       | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-147179 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | disable-driver-mounts-147179                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:33 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-458006             | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-349782            | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-912913  | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:35 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-458006                  | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-349782                 | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-912913       | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-405706        | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-405706             | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
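	# The final Audit entry above is the start invocation whose log follows below; as a single
	# command line it corresponds roughly to the following (reconstructed from the table; exact
	# quoting is an assumption):
	#   out/minikube-linux-amd64 start -p old-k8s-version-405706 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0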
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 15:37:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 15:37:58.460225 1076050 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:37:58.460642 1076050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:37:58.460654 1076050 out.go:358] Setting ErrFile to fd 2...
	I0127 15:37:58.460661 1076050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:37:58.461077 1076050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:37:58.462086 1076050 out.go:352] Setting JSON to false
	I0127 15:37:58.463486 1076050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22825,"bootTime":1737969453,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:37:58.463630 1076050 start.go:139] virtualization: kvm guest
	I0127 15:37:58.465774 1076050 out.go:177] * [old-k8s-version-405706] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:37:58.467019 1076050 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:37:58.467027 1076050 notify.go:220] Checking for updates...
	I0127 15:37:58.469366 1076050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:37:58.470862 1076050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:37:58.472239 1076050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:37:58.473602 1076050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:37:58.474992 1076050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:37:58.477098 1076050 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:37:58.477731 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.477799 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.494965 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0127 15:37:58.495385 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.495879 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.495901 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.496287 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.496581 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.498539 1076050 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 15:37:58.499766 1076050 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:37:58.500092 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.500132 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.516530 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0127 15:37:58.517083 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.517634 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.517666 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.518105 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.518356 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.558744 1076050 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:37:58.560294 1076050 start.go:297] selected driver: kvm2
	I0127 15:37:58.560309 1076050 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-4
05706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:37:58.560451 1076050 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:37:58.561175 1076050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:37:58.561284 1076050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:37:58.579056 1076050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:37:58.579656 1076050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:37:58.579710 1076050 cni.go:84] Creating CNI manager for ""
	I0127 15:37:58.579776 1076050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:37:58.579842 1076050 start.go:340] cluster config:
	{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:37:58.580020 1076050 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:37:58.581716 1076050 out.go:177] * Starting "old-k8s-version-405706" primary control-plane node in "old-k8s-version-405706" cluster
	I0127 15:37:58.582897 1076050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:37:58.582967 1076050 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 15:37:58.582980 1076050 cache.go:56] Caching tarball of preloaded images
	I0127 15:37:58.583091 1076050 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:37:58.583107 1076050 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 15:37:58.583235 1076050 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:37:58.583561 1076050 start.go:360] acquireMachinesLock for old-k8s-version-405706: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:37:58.583628 1076050 start.go:364] duration metric: took 38.743µs to acquireMachinesLock for "old-k8s-version-405706"
	I0127 15:37:58.583652 1076050 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:37:58.583664 1076050 fix.go:54] fixHost starting: 
	I0127 15:37:58.584041 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.584088 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.599995 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0127 15:37:58.600476 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.600955 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.600978 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.601364 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.601600 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.601761 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetState
	I0127 15:37:58.603539 1076050 fix.go:112] recreateIfNeeded on old-k8s-version-405706: state=Stopped err=<nil>
	I0127 15:37:58.603586 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	W0127 15:37:58.603763 1076050 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:37:58.606243 1076050 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-405706" ...
	I0127 15:37:54.081369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:56.581569 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.582848 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:59.787393 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:01.789117 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.529695 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:01.029818 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.607570 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .Start
	I0127 15:37:58.607751 1076050 main.go:141] libmachine: (old-k8s-version-405706) starting domain...
	I0127 15:37:58.607775 1076050 main.go:141] libmachine: (old-k8s-version-405706) ensuring networks are active...
	I0127 15:37:58.608545 1076050 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network default is active
	I0127 15:37:58.608940 1076050 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network mk-old-k8s-version-405706 is active
	I0127 15:37:58.609360 1076050 main.go:141] libmachine: (old-k8s-version-405706) getting domain XML...
	I0127 15:37:58.610094 1076050 main.go:141] libmachine: (old-k8s-version-405706) creating domain...
	I0127 15:37:59.916140 1076050 main.go:141] libmachine: (old-k8s-version-405706) waiting for IP...
	I0127 15:37:59.917074 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:37:59.917644 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:37:59.917771 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:37:59.917639 1076085 retry.go:31] will retry after 260.191068ms: waiting for domain to come up
	I0127 15:38:00.180221 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.180922 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.180948 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.180879 1076085 retry.go:31] will retry after 359.566395ms: waiting for domain to come up
	I0127 15:38:00.542429 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.543056 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.543097 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.542942 1076085 retry.go:31] will retry after 454.555688ms: waiting for domain to come up
	I0127 15:38:00.999387 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.999926 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.999963 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.999888 1076085 retry.go:31] will retry after 559.246215ms: waiting for domain to come up
	I0127 15:38:01.560836 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:01.561528 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:01.561554 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:01.561489 1076085 retry.go:31] will retry after 552.626147ms: waiting for domain to come up
	I0127 15:38:02.116418 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:02.116873 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:02.116914 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:02.116852 1076085 retry.go:31] will retry after 808.293412ms: waiting for domain to come up
	I0127 15:38:02.927177 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:02.927742 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:02.927794 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:02.927707 1076085 retry.go:31] will retry after 740.958034ms: waiting for domain to come up
	I0127 15:38:00.583568 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.081418 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:04.290371 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:06.787711 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.529199 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:05.530455 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.670221 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:03.670746 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:03.670778 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:03.670698 1076085 retry.go:31] will retry after 1.365040284s: waiting for domain to come up
	I0127 15:38:05.038371 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:05.039049 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:05.039084 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:05.039001 1076085 retry.go:31] will retry after 1.410803026s: waiting for domain to come up
	I0127 15:38:06.451661 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:06.452329 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:06.452353 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:06.452303 1076085 retry.go:31] will retry after 1.899894945s: waiting for domain to come up
	I0127 15:38:08.354209 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:08.354816 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:08.354843 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:08.354774 1076085 retry.go:31] will retry after 2.020609979s: waiting for domain to come up
	I0127 15:38:05.581452 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:07.587869 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:08.788730 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:11.289383 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:07.534482 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:10.029370 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:10.377713 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:10.378246 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:10.378288 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:10.378203 1076085 retry.go:31] will retry after 2.469378968s: waiting for domain to come up
	I0127 15:38:12.850116 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:12.850624 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:12.850678 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:12.850598 1076085 retry.go:31] will retry after 4.322374162s: waiting for domain to come up
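Annotation: the retry.go lines above poll libvirt until the freshly started domain reports a DHCP lease, sleeping for a growing, jittered interval between attempts. Below is a minimal Go sketch of such a wait-for-IP loop; the lookupIP helper, backoff constants and timeout are illustrative assumptions, not minikube's actual retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoLease stands in for "unable to find current IP address of domain".
var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder for querying libvirt's DHCP leases by the
// domain's MAC address; here it simply fails a few times before succeeding.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.72.49", nil
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring the
// "will retry after ...: waiting for domain to come up" lines above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 5*time.Second {
			delay *= 2 // back off, but cap the growth
		}
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("found domain IP:", ip)
}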
	I0127 15:38:10.085186 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:12.580963 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:13.788914 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:16.287163 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:12.528917 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:14.531412 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:17.028589 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:17.175528 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.176129 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has current primary IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.176161 1076050 main.go:141] libmachine: (old-k8s-version-405706) found domain IP: 192.168.72.49
	I0127 15:38:17.176174 1076050 main.go:141] libmachine: (old-k8s-version-405706) reserving static IP address...
	I0127 15:38:17.176643 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.176678 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | skip adding static IP to network mk-old-k8s-version-405706 - found existing host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"}
	I0127 15:38:17.176696 1076050 main.go:141] libmachine: (old-k8s-version-405706) reserved static IP address 192.168.72.49 for domain old-k8s-version-405706
	I0127 15:38:17.176711 1076050 main.go:141] libmachine: (old-k8s-version-405706) waiting for SSH...
	I0127 15:38:17.176725 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Getting to WaitForSSH function...
	I0127 15:38:17.179302 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.179688 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.179730 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.179875 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH client type: external
	I0127 15:38:17.179902 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa (-rw-------)
	I0127 15:38:17.179949 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:38:17.179964 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | About to run SSH command:
	I0127 15:38:17.179977 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | exit 0
	I0127 15:38:17.309257 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | SSH cmd err, output: <nil>: 
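Annotation: the "Using SSH client type: external" lines above probe SSH reachability by shelling out to the system ssh binary with non-interactive options and running `exit 0`. A rough, self-contained sketch of that probe follows; the key path and address are parameters supplied by the caller, and this is not libmachine's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// sshProbe runs `exit 0` on the target via the external ssh binary, using
// roughly the non-interactive options visible in the log above. A nil error
// means the guest is accepting SSH connections.
func sshProbe(user, addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical key path; substitute the machine's real id_rsa.
	if err := sshProbe("docker", "192.168.72.49", "/path/to/id_rsa"); err != nil {
		fmt.Println("not reachable yet:", err)
		return
	}
	fmt.Println("SSH is up")
}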
	I0127 15:38:17.309663 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetConfigRaw
	I0127 15:38:17.310369 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:17.313129 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.313573 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.313604 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.313898 1076050 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:38:17.314149 1076050 machine.go:93] provisionDockerMachine start ...
	I0127 15:38:17.314178 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:17.314424 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.317176 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.317563 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.317591 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.317822 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.318108 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.318299 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.318460 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.318635 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.318853 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.318864 1076050 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:38:17.433866 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 15:38:17.433903 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.434143 1076050 buildroot.go:166] provisioning hostname "old-k8s-version-405706"
	I0127 15:38:17.434203 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.434415 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.437023 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.437426 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.437473 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.437592 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.437754 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.437908 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.438061 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.438217 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.438406 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.438418 1076050 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-405706 && echo "old-k8s-version-405706" | sudo tee /etc/hostname
	I0127 15:38:17.569398 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-405706
	
	I0127 15:38:17.569429 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.572466 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.572839 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.572882 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.573066 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.573312 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.573557 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.573726 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.573924 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.574106 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.574123 1076050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-405706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-405706/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-405706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:38:17.705253 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:38:17.705300 1076050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:38:17.705320 1076050 buildroot.go:174] setting up certificates
	I0127 15:38:17.705333 1076050 provision.go:84] configureAuth start
	I0127 15:38:17.705346 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.705683 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:17.708834 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.709332 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.709361 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.709583 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.712195 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.712714 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.712755 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.712924 1076050 provision.go:143] copyHostCerts
	I0127 15:38:17.712990 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:38:17.713017 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:38:17.713095 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:38:17.713241 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:38:17.713259 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:38:17.713326 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:38:17.713446 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:38:17.713460 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:38:17.713500 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:38:17.713572 1076050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-405706 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-405706]
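Annotation: the provision.go line above generates a server certificate whose SANs cover the loopback address, the VM IP and the machine names. Below is a minimal self-contained sketch of issuing such a SAN certificate with Go's crypto/x509; the self-signed CA is a stand-in for the existing .minikube CA key pair, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (stand-in for .minikube/certs/ca.pem and ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-405706"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-405706"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.49")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Fprintln(os.Stderr, "server cert generated with DNS and IP SANs")
}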
	I0127 15:38:17.976673 1076050 provision.go:177] copyRemoteCerts
	I0127 15:38:17.976750 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:38:17.976777 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.979513 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.979876 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.979909 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.980065 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.980267 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.980415 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.980554 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.068921 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:38:18.098428 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 15:38:18.126079 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 15:38:18.152193 1076050 provision.go:87] duration metric: took 446.842204ms to configureAuth
	I0127 15:38:18.152233 1076050 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:38:18.152508 1076050 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:38:18.152613 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.155796 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.156222 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.156254 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.156368 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.156577 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.156774 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.156938 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.157163 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:18.157375 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:18.157392 1076050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:38:18.414989 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:38:18.415023 1076050 machine.go:96] duration metric: took 1.100855468s to provisionDockerMachine
	I0127 15:38:18.415039 1076050 start.go:293] postStartSetup for "old-k8s-version-405706" (driver="kvm2")
	I0127 15:38:18.415054 1076050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:38:18.415078 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.415462 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:38:18.415499 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.418353 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.418778 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.418818 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.418925 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.419129 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.419322 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.419440 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:14.581198 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:16.581669 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:18.508389 1076050 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:38:18.513026 1076050 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:38:18.513065 1076050 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:38:18.513137 1076050 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:38:18.513210 1076050 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:38:18.513309 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:38:18.523553 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:38:18.550472 1076050 start.go:296] duration metric: took 135.415525ms for postStartSetup
	I0127 15:38:18.550553 1076050 fix.go:56] duration metric: took 19.966860382s for fixHost
	I0127 15:38:18.550584 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.553490 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.553896 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.553956 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.554089 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.554297 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.554458 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.554585 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.554806 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:18.555042 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:18.555058 1076050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:38:18.670326 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737992298.641469796
	
	I0127 15:38:18.670351 1076050 fix.go:216] guest clock: 1737992298.641469796
	I0127 15:38:18.670358 1076050 fix.go:229] Guest: 2025-01-27 15:38:18.641469796 +0000 UTC Remote: 2025-01-27 15:38:18.550560739 +0000 UTC m=+20.130793423 (delta=90.909057ms)
	I0127 15:38:18.670379 1076050 fix.go:200] guest clock delta is within tolerance: 90.909057ms
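Annotation: the fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew is small enough. A small sketch of that comparison; the one-second tolerance is an assumption for illustration, and the guest timestamp is the literal value from this log.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	whole := int64(secs)
	frac := int64((secs - float64(whole)) * 1e9)
	return time.Unix(whole, frac), nil
}

func main() {
	const tolerance = time.Second // assumed acceptable skew

	// Guest timestamp taken from the log above; with live values the delta
	// would be the ~90ms reported there.
	guest, err := parseGuestClock("1737992298.641469796")
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}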
	I0127 15:38:18.670384 1076050 start.go:83] releasing machines lock for "old-k8s-version-405706", held for 20.08674208s
	I0127 15:38:18.670400 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.670689 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:18.673557 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.673931 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.673967 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.674112 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674583 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674751 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674869 1076050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:38:18.674916 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.674944 1076050 ssh_runner.go:195] Run: cat /version.json
	I0127 15:38:18.674975 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.677875 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678255 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678395 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.678427 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678595 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.678749 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.678783 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678819 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.679001 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.679093 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.679181 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.679243 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.681217 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.681729 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.787808 1076050 ssh_runner.go:195] Run: systemctl --version
	I0127 15:38:18.794834 1076050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:38:18.943494 1076050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:38:18.950152 1076050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:38:18.950269 1076050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:38:18.967110 1076050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:38:18.967141 1076050 start.go:495] detecting cgroup driver to use...
	I0127 15:38:18.967215 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:38:18.985631 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:38:19.002007 1076050 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:38:19.002098 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:38:19.015975 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:38:19.030630 1076050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:38:19.167900 1076050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:38:19.339595 1076050 docker.go:233] disabling docker service ...
	I0127 15:38:19.339680 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:38:19.355894 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:38:19.370010 1076050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:38:19.503289 1076050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:38:19.640006 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:38:19.656134 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:38:19.676136 1076050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 15:38:19.676207 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.688127 1076050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:38:19.688235 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.700866 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.712387 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.724833 1076050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:38:19.736825 1076050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:38:19.747906 1076050 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:38:19.747976 1076050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:38:19.761744 1076050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
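Annotation: the crio.go lines above first try to read the bridge netfilter sysctl, fall back to loading the br_netfilter module when /proc/sys/net/bridge is absent (the status-255 error is explicitly treated as non-fatal), and then enable IPv4 forwarding. A rough sketch of that fallback chain with os/exec; in minikube these commands run over SSH on the guest, so treat this as illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// 1. Check whether the bridge netfilter sysctl is already available.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// 2. Not fatal: the module may simply not be loaded yet.
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// 3. Enable IPv4 forwarding so pod traffic can be routed.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}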
	I0127 15:38:19.771558 1076050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:38:19.891616 1076050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:38:19.987396 1076050 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:38:19.987496 1076050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:38:19.993148 1076050 start.go:563] Will wait 60s for crictl version
	I0127 15:38:19.993218 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:19.997232 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:38:20.047289 1076050 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:38:20.047381 1076050 ssh_runner.go:195] Run: crio --version
	I0127 15:38:20.080844 1076050 ssh_runner.go:195] Run: crio --version
	I0127 15:38:20.113498 1076050 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 15:38:18.287782 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:20.288830 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:19.029508 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:21.031738 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:20.115011 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:20.118087 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:20.118526 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:20.118554 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:20.118911 1076050 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 15:38:20.123918 1076050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:38:20.137420 1076050 kubeadm.go:883] updating cluster {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:38:20.137608 1076050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:38:20.137679 1076050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:38:20.203088 1076050 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:38:20.203162 1076050 ssh_runner.go:195] Run: which lz4
	I0127 15:38:20.207834 1076050 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:38:20.212511 1076050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:38:20.212550 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 15:38:21.944361 1076050 crio.go:462] duration metric: took 1.736570115s to copy over tarball
	I0127 15:38:21.944459 1076050 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
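Annotation: the preload step above stats /preloaded.tar.lz4 on the guest, copies the cached tarball over only when it is missing, and unpacks it into /var with lz4. A condensed sketch of that check-then-copy decision, shelling out to ssh/scp rather than using minikube's ssh_runner; the key path is hypothetical, the other paths are the ones from this log and used purely for illustration.

package main

import (
	"fmt"
	"os/exec"
)

const (
	target  = "docker@192.168.72.49"
	local   = "/home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	remote  = "/preloaded.tar.lz4"
	keyFile = "/path/to/id_rsa" // hypothetical
)

// ssh runs a single command on the guest over the external ssh binary.
func ssh(cmd string) error {
	return exec.Command("ssh", "-i", keyFile, target, cmd).Run()
}

func main() {
	// Skip the copy when the tarball is already present on the guest.
	if err := ssh("stat " + remote); err != nil {
		fmt.Println("preload not present, copying it over")
		if err := exec.Command("scp", "-i", keyFile, local, target+":"+remote).Run(); err != nil {
			panic(err)
		}
	}
	// Extract the preloaded images into /var, preserving xattrs, using lz4.
	extract := "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote
	if err := ssh(extract); err != nil {
		panic(err)
	}
	fmt.Println("preloaded images extracted")
}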
	I0127 15:38:19.082119 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:21.583597 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:22.786853 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:24.787379 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:26.788848 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:23.529051 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:25.530450 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:25.017812 1076050 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.073312095s)
	I0127 15:38:25.017848 1076050 crio.go:469] duration metric: took 3.07344607s to extract the tarball
	I0127 15:38:25.017859 1076050 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 15:38:25.068609 1076050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:38:25.107660 1076050 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:38:25.107705 1076050 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 15:38:25.107797 1076050 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.107831 1076050 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.107843 1076050 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 15:38:25.107782 1076050 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.107866 1076050 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.107793 1076050 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.107810 1076050 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.107872 1076050 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.109711 1076050 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.109716 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.109736 1076050 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.109749 1076050 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 15:38:25.109765 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.109711 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.109717 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.109721 1076050 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.319866 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 15:38:25.320854 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.329418 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.331454 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.331999 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.338125 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.346119 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.438398 1076050 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 15:38:25.438508 1076050 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 15:38:25.438596 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.485875 1076050 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 15:38:25.485939 1076050 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.486002 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.524177 1076050 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 15:38:25.524230 1076050 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.524284 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.533972 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.537150 1076050 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 15:38:25.537198 1076050 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.537239 1076050 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 15:38:25.537282 1076050 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.537306 1076050 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 15:38:25.537329 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537256 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537388 1076050 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 15:38:25.537334 1076050 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.537413 1076050 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.537430 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537437 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.537438 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537484 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.537505 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.730245 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.730334 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.730438 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.730438 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.730510 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.730615 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.730667 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.896539 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.896835 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.896864 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.896869 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.896952 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.896990 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.897080 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:26.067159 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 15:38:26.067203 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:26.067293 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:26.078064 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:26.078128 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 15:38:26.078233 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:26.078345 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 15:38:26.172870 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 15:38:26.172975 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 15:38:26.177848 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 15:38:26.177943 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 15:38:26.177981 1076050 cache_images.go:92] duration metric: took 1.070258879s to LoadCachedImages
	W0127 15:38:26.178068 1076050 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0127 15:38:26.178082 1076050 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0127 15:38:26.178211 1076050 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-405706 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:38:26.178294 1076050 ssh_runner.go:195] Run: crio config
	I0127 15:38:26.228357 1076050 cni.go:84] Creating CNI manager for ""
	I0127 15:38:26.228379 1076050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:38:26.228388 1076050 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:38:26.228409 1076050 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-405706 NodeName:old-k8s-version-405706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 15:38:26.228568 1076050 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-405706"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:38:26.228657 1076050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 15:38:26.240731 1076050 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:38:26.240809 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:38:26.251662 1076050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 15:38:26.270153 1076050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:38:26.292045 1076050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 15:38:26.312171 1076050 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0127 15:38:26.316436 1076050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:38:26.330437 1076050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:38:26.453879 1076050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:38:26.473364 1076050 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706 for IP: 192.168.72.49
	I0127 15:38:26.473395 1076050 certs.go:194] generating shared ca certs ...
	I0127 15:38:26.473419 1076050 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:38:26.473672 1076050 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:38:26.473739 1076050 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:38:26.473755 1076050 certs.go:256] generating profile certs ...
	I0127 15:38:26.473909 1076050 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.key
	I0127 15:38:26.473993 1076050 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key.8816e362
	I0127 15:38:26.474047 1076050 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key
	I0127 15:38:26.474215 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:38:26.474262 1076050 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:38:26.474272 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:38:26.474304 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:38:26.474335 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:38:26.474377 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:38:26.474434 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:38:26.475310 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:38:26.528151 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:38:26.569116 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:38:26.612791 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:38:26.643362 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 15:38:26.682611 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:38:26.736411 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:38:26.766171 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 15:38:26.806820 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:38:26.835935 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:38:26.862752 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:38:26.890713 1076050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:38:26.910713 1076050 ssh_runner.go:195] Run: openssl version
	I0127 15:38:26.917762 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:38:26.930093 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.935103 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.935187 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.941655 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:38:26.955281 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:38:26.969095 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.974104 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.974177 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.980428 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:38:26.992636 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:38:27.006632 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.011797 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.011873 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.018384 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 15:38:27.032120 1076050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:38:27.037441 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:38:27.044020 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:38:27.050856 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:38:27.057896 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:38:27.065183 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:38:27.072632 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 15:38:27.079504 1076050 kubeadm.go:392] StartCluster: {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:38:27.079605 1076050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:38:27.079670 1076050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:38:27.122961 1076050 cri.go:89] found id: ""
	I0127 15:38:27.123034 1076050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:38:27.134170 1076050 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 15:38:27.134194 1076050 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 15:38:27.134254 1076050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 15:38:27.146526 1076050 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:38:27.147269 1076050 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-405706" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:38:27.147608 1076050 kubeconfig.go:62] /home/jenkins/minikube-integration/20321-1005652/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-405706" cluster setting kubeconfig missing "old-k8s-version-405706" context setting]
	I0127 15:38:27.148175 1076050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:38:27.218301 1076050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 15:38:27.230797 1076050 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0127 15:38:27.230842 1076050 kubeadm.go:1160] stopping kube-system containers ...
	I0127 15:38:27.230858 1076050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 15:38:27.230918 1076050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:38:27.273845 1076050 cri.go:89] found id: ""
	I0127 15:38:27.273935 1076050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 15:38:27.295864 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:38:27.308596 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:38:27.308616 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:38:27.308663 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:38:27.319955 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:38:27.320015 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:38:27.331528 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:38:27.342177 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:38:27.342248 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:38:27.352666 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:38:27.364010 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:38:27.364077 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:38:27.375886 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:38:27.386069 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:38:27.386141 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:38:27.398977 1076050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:38:27.410085 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:27.579462 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.350228 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:24.081574 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:26.084881 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.581361 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:29.287085 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:31.288269 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.030083 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:30.030174 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.604472 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.715137 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.812566 1076050 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:38:28.812663 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:29.312952 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:29.812784 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:30.313395 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:30.813525 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.313773 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.813137 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:32.313501 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:32.813028 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:33.312894 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.080211 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.582580 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.788390 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:36.287173 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:32.529206 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:35.028518 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:37.031307 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.813345 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:34.313510 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:34.813678 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:35.313121 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:35.813541 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.312890 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.813411 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:37.313228 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:37.813599 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:38.313526 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.081107 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.582581 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.287892 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:40.787491 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:39.529329 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:42.028378 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.812744 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:39.313501 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:39.813568 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:40.313585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:40.813078 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.312734 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.812823 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:42.312829 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:42.813108 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:43.312983 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.080457 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:43.082314 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:42.787697 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:45.287260 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:47.287367 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:44.028619 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:46.029083 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:43.813614 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:44.313522 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:44.813162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.313000 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.813166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:46.313147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:46.812791 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:47.312810 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:47.812775 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:48.313432 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.581743 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:47.582153 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:49.287859 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:51.288012 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:48.029471 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:50.529718 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:48.813154 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:49.312838 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:49.813340 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.312925 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.813287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:51.312785 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:51.813687 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:52.313111 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:52.812802 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:53.313097 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.081002 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:52.581311 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.288532 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:55.788221 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.028591 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:55.529910 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.813587 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.313181 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.812993 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:55.313464 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:55.813050 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:56.312920 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:56.813705 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:57.313622 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:57.812842 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:58.313381 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.581795 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:57.080722 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.288309 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:00.786850 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.028613 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:00.529908 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.812816 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.312817 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.813035 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:00.313444 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:00.813287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:01.312763 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:01.813721 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:02.313131 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:02.813297 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:03.313697 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.581769 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:02.080943 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:02.787929 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:05.287833 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:07.287889 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:03.029275 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:05.029418 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:07.030052 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:03.813314 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.313147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.813585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:05.313388 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:05.813722 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:06.313190 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:06.812942 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:07.313516 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:07.813321 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:08.313684 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.081681 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:06.582635 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.289282 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.788208 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.528140 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.529355 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:08.813457 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.312972 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.812986 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:10.313838 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:10.813128 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:11.312866 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:11.812982 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:12.312768 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:12.813426 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:13.313370 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.080839 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.581560 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:14.287327 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.288546 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:13.529804 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.028749 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:13.812803 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.313174 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.813162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:15.312724 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:15.813166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:16.313662 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:16.813497 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:17.313422 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:17.813587 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:18.313749 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.080371 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.582575 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.584549 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.787976 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:20.788184 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.029709 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:20.529523 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.813301 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:19.313610 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:19.813293 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:20.313667 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:20.813161 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.313709 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.813699 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:22.313185 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:22.813328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:23.313612 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.080013 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.080298 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.287582 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.787381 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.029776 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.529747 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.812846 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:24.313129 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:24.813728 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.313735 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.813439 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:26.313406 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:26.813597 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:27.313484 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:27.813672 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:28.313161 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.081823 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.581035 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.787632 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.287493 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.289889 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.530494 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.028046 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.030227 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:28.813541 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:28.813633 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:28.855334 1076050 cri.go:89] found id: ""
	I0127 15:39:28.855368 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.855376 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:28.855383 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:28.855466 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:28.892923 1076050 cri.go:89] found id: ""
	I0127 15:39:28.892959 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.892972 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:28.892980 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:28.893081 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:28.942133 1076050 cri.go:89] found id: ""
	I0127 15:39:28.942163 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.942187 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:28.942196 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:28.942261 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:28.980950 1076050 cri.go:89] found id: ""
	I0127 15:39:28.980978 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.980988 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:28.980995 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:28.981080 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:29.022166 1076050 cri.go:89] found id: ""
	I0127 15:39:29.022200 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.022209 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:29.022215 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:29.022269 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:29.060408 1076050 cri.go:89] found id: ""
	I0127 15:39:29.060439 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.060447 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:29.060454 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:29.060521 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:29.100890 1076050 cri.go:89] found id: ""
	I0127 15:39:29.100924 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.100935 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:29.100944 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:29.101075 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:29.139688 1076050 cri.go:89] found id: ""
	I0127 15:39:29.139720 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.139729 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:29.139741 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:29.139752 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:29.181255 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:29.181288 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:29.232218 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:29.232260 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:29.245853 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:29.245881 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:29.382461 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:29.382487 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:29.382501 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:31.957162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:31.971225 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:31.971290 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:32.026501 1076050 cri.go:89] found id: ""
	I0127 15:39:32.026535 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.026546 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:32.026555 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:32.026624 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:32.066192 1076050 cri.go:89] found id: ""
	I0127 15:39:32.066232 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.066244 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:32.066253 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:32.066334 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:32.106017 1076050 cri.go:89] found id: ""
	I0127 15:39:32.106047 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.106056 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:32.106062 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:32.106130 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:32.146534 1076050 cri.go:89] found id: ""
	I0127 15:39:32.146565 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.146575 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:32.146581 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:32.146644 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:32.186982 1076050 cri.go:89] found id: ""
	I0127 15:39:32.187007 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.187016 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:32.187022 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:32.187077 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:32.229657 1076050 cri.go:89] found id: ""
	I0127 15:39:32.229685 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.229693 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:32.229700 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:32.229756 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:32.267228 1076050 cri.go:89] found id: ""
	I0127 15:39:32.267259 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.267268 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:32.267275 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:32.267340 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:32.305366 1076050 cri.go:89] found id: ""
	I0127 15:39:32.305394 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.305402 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:32.305412 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:32.305424 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:32.345293 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:32.345335 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:32.395863 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:32.395922 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:32.411092 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:32.411133 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:32.493214 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:32.493248 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:32.493266 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:30.082518 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.580263 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.787461 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.287358 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.530278 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.028574 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:35.077133 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:35.094000 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:35.094095 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:35.132448 1076050 cri.go:89] found id: ""
	I0127 15:39:35.132488 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.132500 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:35.132508 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:35.132583 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:35.167599 1076050 cri.go:89] found id: ""
	I0127 15:39:35.167632 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.167644 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:35.167653 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:35.167713 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:35.204383 1076050 cri.go:89] found id: ""
	I0127 15:39:35.204429 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.204438 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:35.204444 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:35.204503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:35.241382 1076050 cri.go:89] found id: ""
	I0127 15:39:35.241411 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.241423 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:35.241431 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:35.241500 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:35.278253 1076050 cri.go:89] found id: ""
	I0127 15:39:35.278280 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.278289 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:35.278296 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:35.278357 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:35.320389 1076050 cri.go:89] found id: ""
	I0127 15:39:35.320418 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.320425 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:35.320432 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:35.320498 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:35.360563 1076050 cri.go:89] found id: ""
	I0127 15:39:35.360592 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.360604 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:35.360613 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:35.360670 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:35.396537 1076050 cri.go:89] found id: ""
	I0127 15:39:35.396580 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.396593 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:35.396609 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:35.396628 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:35.474518 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:35.474554 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:35.474575 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:35.554396 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:35.554445 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:35.599042 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:35.599100 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:35.652578 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:35.652619 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:38.167582 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:38.182164 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:38.182250 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:38.218993 1076050 cri.go:89] found id: ""
	I0127 15:39:38.219025 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.219034 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:38.219040 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:38.219121 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:38.257547 1076050 cri.go:89] found id: ""
	I0127 15:39:38.257575 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.257584 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:38.257590 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:38.257643 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:38.295251 1076050 cri.go:89] found id: ""
	I0127 15:39:38.295287 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.295299 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:38.295307 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:38.295378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:38.339567 1076050 cri.go:89] found id: ""
	I0127 15:39:38.339605 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.339621 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:38.339629 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:38.339697 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:38.375969 1076050 cri.go:89] found id: ""
	I0127 15:39:38.376007 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.376019 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:38.376028 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:38.376097 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:38.429385 1076050 cri.go:89] found id: ""
	I0127 15:39:38.429416 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.429427 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:38.429435 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:38.429503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:34.587256 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.080093 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.287413 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.287958 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.028638 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.029306 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:38.481564 1076050 cri.go:89] found id: ""
	I0127 15:39:38.481604 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.481618 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:38.481627 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:38.481700 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:38.535177 1076050 cri.go:89] found id: ""
	I0127 15:39:38.535203 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.535211 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:38.535223 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:38.535238 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:38.549306 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:38.549349 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:38.622573 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:38.622607 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:38.622625 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:38.697323 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:38.697363 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:38.738950 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:38.738981 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:41.298384 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:41.312088 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:41.312162 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:41.349779 1076050 cri.go:89] found id: ""
	I0127 15:39:41.349808 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.349817 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:41.349824 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:41.349887 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:41.387675 1076050 cri.go:89] found id: ""
	I0127 15:39:41.387715 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.387732 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:41.387740 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:41.387797 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:41.424135 1076050 cri.go:89] found id: ""
	I0127 15:39:41.424166 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.424175 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:41.424181 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:41.424246 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:41.464733 1076050 cri.go:89] found id: ""
	I0127 15:39:41.464760 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.464768 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:41.464774 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:41.464835 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:41.506669 1076050 cri.go:89] found id: ""
	I0127 15:39:41.506700 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.506713 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:41.506725 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:41.506793 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:41.548804 1076050 cri.go:89] found id: ""
	I0127 15:39:41.548833 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.548842 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:41.548848 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:41.548911 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:41.590203 1076050 cri.go:89] found id: ""
	I0127 15:39:41.590233 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.590245 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:41.590253 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:41.590318 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:41.625407 1076050 cri.go:89] found id: ""
	I0127 15:39:41.625434 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.625442 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:41.625452 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:41.625466 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:41.702765 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:41.702808 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:41.745622 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:41.745662 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:41.799894 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:41.799943 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:41.814151 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:41.814180 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:41.899042 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:39.580910 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.581608 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.587620 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.787400 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:45.787456 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.529161 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:46.028736 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:44.399328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:44.420663 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:44.420731 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:44.484562 1076050 cri.go:89] found id: ""
	I0127 15:39:44.484595 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.484606 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:44.484616 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:44.484681 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:44.555635 1076050 cri.go:89] found id: ""
	I0127 15:39:44.555663 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.555672 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:44.555678 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:44.555730 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:44.598564 1076050 cri.go:89] found id: ""
	I0127 15:39:44.598592 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.598600 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:44.598606 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:44.598663 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:44.639072 1076050 cri.go:89] found id: ""
	I0127 15:39:44.639115 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.639126 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:44.639134 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:44.639200 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:44.677620 1076050 cri.go:89] found id: ""
	I0127 15:39:44.677652 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.677662 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:44.677670 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:44.677730 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:44.714227 1076050 cri.go:89] found id: ""
	I0127 15:39:44.714263 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.714273 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:44.714281 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:44.714357 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:44.753864 1076050 cri.go:89] found id: ""
	I0127 15:39:44.753898 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.753911 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:44.753919 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:44.753987 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:44.790576 1076050 cri.go:89] found id: ""
	I0127 15:39:44.790603 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.790613 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:44.790625 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:44.790641 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:44.864427 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:44.864468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:44.904955 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:44.904989 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:44.959074 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:44.959137 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:44.976053 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:44.976082 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:45.062578 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:47.562901 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:47.576665 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:47.576751 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:47.615806 1076050 cri.go:89] found id: ""
	I0127 15:39:47.615842 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.615855 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:47.615864 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:47.615936 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:47.651913 1076050 cri.go:89] found id: ""
	I0127 15:39:47.651947 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.651966 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:47.651974 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:47.652045 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:47.688572 1076050 cri.go:89] found id: ""
	I0127 15:39:47.688604 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.688614 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:47.688620 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:47.688680 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:47.726688 1076050 cri.go:89] found id: ""
	I0127 15:39:47.726725 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.726737 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:47.726745 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:47.726815 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:47.768385 1076050 cri.go:89] found id: ""
	I0127 15:39:47.768413 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.768424 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:47.768433 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:47.768493 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:47.806575 1076050 cri.go:89] found id: ""
	I0127 15:39:47.806601 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.806609 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:47.806615 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:47.806668 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:47.843234 1076050 cri.go:89] found id: ""
	I0127 15:39:47.843259 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.843267 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:47.843273 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:47.843325 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:47.882360 1076050 cri.go:89] found id: ""
	I0127 15:39:47.882398 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.882411 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:47.882426 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:47.882445 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:47.936678 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:47.936721 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:47.951861 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:47.951889 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:48.027451 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:48.027479 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:48.027497 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:48.110314 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:48.110362 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:46.079379 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:48.081369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:47.788330 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.288398 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:52.281192 1074659 pod_ready.go:82] duration metric: took 4m0.000550048s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" ...
	E0127 15:39:52.281240 1074659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:39:52.281264 1074659 pod_ready.go:39] duration metric: took 4m13.057238138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:39:52.281309 1074659 kubeadm.go:597] duration metric: took 4m21.316884653s to restartPrimaryControlPlane
	W0127 15:39:52.281435 1074659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:39:52.281477 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:39:48.029038 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.529674 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.653993 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:50.668077 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:50.668150 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:50.708132 1076050 cri.go:89] found id: ""
	I0127 15:39:50.708160 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.708168 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:50.708175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:50.708244 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:50.748371 1076050 cri.go:89] found id: ""
	I0127 15:39:50.748400 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.748409 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:50.748415 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:50.748471 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:50.785148 1076050 cri.go:89] found id: ""
	I0127 15:39:50.785183 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.785194 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:50.785202 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:50.785267 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:50.820827 1076050 cri.go:89] found id: ""
	I0127 15:39:50.820864 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.820874 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:50.820881 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:50.820948 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:50.859566 1076050 cri.go:89] found id: ""
	I0127 15:39:50.859602 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.859615 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:50.859623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:50.859699 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:50.896227 1076050 cri.go:89] found id: ""
	I0127 15:39:50.896263 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.896276 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:50.896285 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:50.896352 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:50.933357 1076050 cri.go:89] found id: ""
	I0127 15:39:50.933393 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.933405 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:50.933414 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:50.933478 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:50.968264 1076050 cri.go:89] found id: ""
	I0127 15:39:50.968303 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.968313 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:50.968324 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:50.968338 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:51.026708 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:51.026754 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:51.041436 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:51.041475 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:51.110945 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:51.110967 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:51.110980 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:51.192815 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:51.192858 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:50.581346 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:53.080934 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:52.529918 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:55.028235 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:57.029052 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:53.737031 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:53.751175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:53.751266 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:53.793720 1076050 cri.go:89] found id: ""
	I0127 15:39:53.793748 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.793757 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:53.793764 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:53.793822 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:53.832993 1076050 cri.go:89] found id: ""
	I0127 15:39:53.833065 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.833074 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:53.833080 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:53.833139 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:53.872089 1076050 cri.go:89] found id: ""
	I0127 15:39:53.872122 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.872133 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:53.872147 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:53.872205 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:53.914262 1076050 cri.go:89] found id: ""
	I0127 15:39:53.914298 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.914311 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:53.914321 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:53.914400 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:53.954035 1076050 cri.go:89] found id: ""
	I0127 15:39:53.954073 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.954085 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:53.954093 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:53.954158 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:53.994248 1076050 cri.go:89] found id: ""
	I0127 15:39:53.994306 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.994320 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:53.994329 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:53.994407 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:54.031811 1076050 cri.go:89] found id: ""
	I0127 15:39:54.031836 1076050 logs.go:282] 0 containers: []
	W0127 15:39:54.031847 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:54.031855 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:54.031917 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:54.070159 1076050 cri.go:89] found id: ""
	I0127 15:39:54.070199 1076050 logs.go:282] 0 containers: []
	W0127 15:39:54.070212 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:54.070225 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:54.070242 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:54.112540 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:54.112575 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:54.163657 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:54.163710 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:54.178720 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:54.178757 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:54.255558 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:54.255596 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:54.255613 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:56.834676 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:56.848186 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:56.848265 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:56.885958 1076050 cri.go:89] found id: ""
	I0127 15:39:56.885984 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.885993 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:56.885999 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:56.886050 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:56.925195 1076050 cri.go:89] found id: ""
	I0127 15:39:56.925233 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.925247 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:56.925256 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:56.925328 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:56.967597 1076050 cri.go:89] found id: ""
	I0127 15:39:56.967631 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.967644 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:56.967654 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:56.967719 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:57.005973 1076050 cri.go:89] found id: ""
	I0127 15:39:57.006008 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.006021 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:57.006029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:57.006104 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:57.042547 1076050 cri.go:89] found id: ""
	I0127 15:39:57.042581 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.042593 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:57.042601 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:57.042664 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:57.084492 1076050 cri.go:89] found id: ""
	I0127 15:39:57.084517 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.084525 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:57.084531 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:57.084581 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:57.120954 1076050 cri.go:89] found id: ""
	I0127 15:39:57.120988 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.121032 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:57.121039 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:57.121100 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:57.159620 1076050 cri.go:89] found id: ""
	I0127 15:39:57.159657 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.159668 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:57.159681 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:57.159696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:57.203209 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:57.203245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:57.253929 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:57.253972 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:57.268430 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:57.268463 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:57.338716 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:57.338741 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:57.338760 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:55.082397 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:57.581203 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:59.528435 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:01.530232 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:59.918299 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:59.933577 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:59.933650 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:59.971396 1076050 cri.go:89] found id: ""
	I0127 15:39:59.971437 1076050 logs.go:282] 0 containers: []
	W0127 15:39:59.971449 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:59.971457 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:59.971516 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:00.012852 1076050 cri.go:89] found id: ""
	I0127 15:40:00.012890 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.012902 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:00.012910 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:00.012983 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:00.053636 1076050 cri.go:89] found id: ""
	I0127 15:40:00.053673 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.053685 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:00.053693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:00.053757 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:00.091584 1076050 cri.go:89] found id: ""
	I0127 15:40:00.091615 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.091626 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:00.091634 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:00.091698 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:00.126906 1076050 cri.go:89] found id: ""
	I0127 15:40:00.126936 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.126945 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:00.126957 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:00.127012 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:00.164308 1076050 cri.go:89] found id: ""
	I0127 15:40:00.164345 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.164354 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:00.164360 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:00.164412 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:00.200695 1076050 cri.go:89] found id: ""
	I0127 15:40:00.200727 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.200739 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:00.200750 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:00.200807 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:00.239910 1076050 cri.go:89] found id: ""
	I0127 15:40:00.239938 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.239947 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:00.239958 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:00.239970 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:00.255441 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:00.255468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:00.333737 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:00.333767 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:00.333782 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:00.417199 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:00.417256 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:00.461683 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:00.461711 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:03.016318 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:03.033626 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:03.033707 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:03.070895 1076050 cri.go:89] found id: ""
	I0127 15:40:03.070929 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.070940 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:03.070948 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:03.071011 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:03.107691 1076050 cri.go:89] found id: ""
	I0127 15:40:03.107725 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.107736 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:03.107742 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:03.107806 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:03.144989 1076050 cri.go:89] found id: ""
	I0127 15:40:03.145032 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.145044 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:03.145052 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:03.145106 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:03.182441 1076050 cri.go:89] found id: ""
	I0127 15:40:03.182473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.182482 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:03.182488 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:03.182540 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:03.220251 1076050 cri.go:89] found id: ""
	I0127 15:40:03.220286 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.220298 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:03.220306 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:03.220366 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:03.258761 1076050 cri.go:89] found id: ""
	I0127 15:40:03.258799 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.258810 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:03.258818 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:03.258888 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:03.307236 1076050 cri.go:89] found id: ""
	I0127 15:40:03.307274 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.307283 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:03.307289 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:03.307352 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:03.354451 1076050 cri.go:89] found id: ""
	I0127 15:40:03.354487 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.354498 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:03.354509 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:03.354524 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:03.405369 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:03.405412 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:03.420837 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:03.420866 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 15:40:00.081973 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:02.581659 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:04.030283 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:06.529988 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	W0127 15:40:03.496384 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:03.496420 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:03.496435 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:03.576992 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:03.577066 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:06.128185 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:06.142266 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:06.142381 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:06.181053 1076050 cri.go:89] found id: ""
	I0127 15:40:06.181087 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.181097 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:06.181106 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:06.181162 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:06.218206 1076050 cri.go:89] found id: ""
	I0127 15:40:06.218236 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.218245 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:06.218251 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:06.218304 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:06.255094 1076050 cri.go:89] found id: ""
	I0127 15:40:06.255138 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.255158 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:06.255165 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:06.255221 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:06.295564 1076050 cri.go:89] found id: ""
	I0127 15:40:06.295598 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.295611 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:06.295620 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:06.295683 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:06.332518 1076050 cri.go:89] found id: ""
	I0127 15:40:06.332552 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.332561 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:06.332568 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:06.332641 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:06.371503 1076050 cri.go:89] found id: ""
	I0127 15:40:06.371532 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.371540 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:06.371547 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:06.371599 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:06.409091 1076050 cri.go:89] found id: ""
	I0127 15:40:06.409119 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.409128 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:06.409135 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:06.409192 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:06.445033 1076050 cri.go:89] found id: ""
	I0127 15:40:06.445078 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.445092 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:06.445113 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:06.445132 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:06.460284 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:06.460321 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:06.543807 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:06.543831 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:06.543844 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:06.626884 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:06.626929 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:06.670309 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:06.670350 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:05.075392 1074908 pod_ready.go:82] duration metric: took 4m0.001148212s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" ...
	E0127 15:40:05.075435 1074908 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:40:05.075460 1074908 pod_ready.go:39] duration metric: took 4m14.604653981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:05.075504 1074908 kubeadm.go:597] duration metric: took 4m23.17285487s to restartPrimaryControlPlane
	W0127 15:40:05.075610 1074908 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:40:05.075649 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:09.029666 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:11.529388 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:09.219752 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:09.234460 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:09.234537 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:09.271526 1076050 cri.go:89] found id: ""
	I0127 15:40:09.271574 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.271584 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:09.271590 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:09.271661 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:09.312643 1076050 cri.go:89] found id: ""
	I0127 15:40:09.312681 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.312696 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:09.312705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:09.312771 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:09.351697 1076050 cri.go:89] found id: ""
	I0127 15:40:09.351736 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.351749 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:09.351757 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:09.351825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:09.390289 1076050 cri.go:89] found id: ""
	I0127 15:40:09.390315 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.390324 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:09.390332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:09.390400 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:09.431515 1076050 cri.go:89] found id: ""
	I0127 15:40:09.431548 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.431559 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:09.431567 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:09.431634 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:09.473134 1076050 cri.go:89] found id: ""
	I0127 15:40:09.473170 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.473182 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:09.473190 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:09.473261 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:09.516505 1076050 cri.go:89] found id: ""
	I0127 15:40:09.516542 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.516556 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:09.516564 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:09.516634 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:09.560596 1076050 cri.go:89] found id: ""
	I0127 15:40:09.560638 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.560649 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:09.560662 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:09.560678 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:09.616174 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:09.616219 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:09.631586 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:09.631622 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:09.706642 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:09.706677 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:09.706696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:09.780834 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:09.780883 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:12.323632 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:12.337043 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:12.337121 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:12.371851 1076050 cri.go:89] found id: ""
	I0127 15:40:12.371875 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.371884 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:12.371891 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:12.371963 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:12.409962 1076050 cri.go:89] found id: ""
	I0127 15:40:12.409997 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.410010 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:12.410018 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:12.410095 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:12.445440 1076050 cri.go:89] found id: ""
	I0127 15:40:12.445473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.445482 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:12.445489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:12.445544 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:12.481239 1076050 cri.go:89] found id: ""
	I0127 15:40:12.481270 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.481282 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:12.481303 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:12.481372 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:12.520832 1076050 cri.go:89] found id: ""
	I0127 15:40:12.520859 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.520867 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:12.520873 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:12.520923 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:12.559781 1076050 cri.go:89] found id: ""
	I0127 15:40:12.559818 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.559829 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:12.559838 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:12.559901 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:12.597821 1076050 cri.go:89] found id: ""
	I0127 15:40:12.597861 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.597873 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:12.597882 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:12.597944 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:12.635939 1076050 cri.go:89] found id: ""
	I0127 15:40:12.635974 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.635986 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:12.635998 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:12.636013 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:12.709126 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:12.709150 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:12.709163 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:12.792573 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:12.792617 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:12.832327 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:12.832368 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:12.884984 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:12.885039 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:14.028951 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:16.029783 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:15.401225 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:15.415906 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:15.415993 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:15.457989 1076050 cri.go:89] found id: ""
	I0127 15:40:15.458021 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.458031 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:15.458038 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:15.458100 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:15.493789 1076050 cri.go:89] found id: ""
	I0127 15:40:15.493836 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.493852 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:15.493860 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:15.493927 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:15.535193 1076050 cri.go:89] found id: ""
	I0127 15:40:15.535219 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.535227 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:15.535233 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:15.535298 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:15.574983 1076050 cri.go:89] found id: ""
	I0127 15:40:15.575016 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.575030 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:15.575036 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:15.575107 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:15.613038 1076050 cri.go:89] found id: ""
	I0127 15:40:15.613072 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.613083 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:15.613091 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:15.613166 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:15.651439 1076050 cri.go:89] found id: ""
	I0127 15:40:15.651473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.651483 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:15.651489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:15.651559 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:15.697895 1076050 cri.go:89] found id: ""
	I0127 15:40:15.697933 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.697945 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:15.697953 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:15.698026 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:15.736368 1076050 cri.go:89] found id: ""
	I0127 15:40:15.736397 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.736405 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:15.736416 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:15.736431 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:15.788954 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:15.789002 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:15.803162 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:15.803193 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:15.878504 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:15.878538 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:15.878557 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:15.955134 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:15.955186 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:20.131059 1074659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.849552205s)
	I0127 15:40:20.131159 1074659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:20.154965 1074659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:20.170718 1074659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:20.182783 1074659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:20.182813 1074659 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:20.182879 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:40:20.196772 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:20.196838 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:20.219107 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:40:20.231548 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:20.231633 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:20.243226 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:40:20.262500 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:20.262565 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:20.273568 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:40:20.283606 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:20.283675 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:20.294389 1074659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:20.475280 1074659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:40:18.529412 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:21.029561 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:18.497724 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:18.519382 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:18.519463 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:18.556458 1076050 cri.go:89] found id: ""
	I0127 15:40:18.556495 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.556504 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:18.556511 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:18.556566 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:18.593672 1076050 cri.go:89] found id: ""
	I0127 15:40:18.593700 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.593717 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:18.593726 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:18.593794 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:18.632353 1076050 cri.go:89] found id: ""
	I0127 15:40:18.632393 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.632404 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:18.632412 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:18.632467 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:18.668613 1076050 cri.go:89] found id: ""
	I0127 15:40:18.668647 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.668659 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:18.668668 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:18.668738 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:18.706751 1076050 cri.go:89] found id: ""
	I0127 15:40:18.706786 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.706798 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:18.706806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:18.706872 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:18.745670 1076050 cri.go:89] found id: ""
	I0127 15:40:18.745706 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.745719 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:18.745728 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:18.745798 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:18.783666 1076050 cri.go:89] found id: ""
	I0127 15:40:18.783696 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.783708 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:18.783716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:18.783783 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:18.821591 1076050 cri.go:89] found id: ""
	I0127 15:40:18.821626 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.821637 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:18.821652 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:18.821669 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:18.895554 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:18.895582 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:18.895600 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:18.977366 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:18.977416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:19.020341 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:19.020374 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:19.073493 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:19.073537 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:21.589182 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:21.607125 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:21.607245 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:21.654887 1076050 cri.go:89] found id: ""
	I0127 15:40:21.654922 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.654933 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:21.654942 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:21.655013 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:21.703233 1076050 cri.go:89] found id: ""
	I0127 15:40:21.703279 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.703289 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:21.703298 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:21.703440 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:21.744227 1076050 cri.go:89] found id: ""
	I0127 15:40:21.744260 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.744273 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:21.744286 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:21.744356 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:21.786397 1076050 cri.go:89] found id: ""
	I0127 15:40:21.786430 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.786445 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:21.786454 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:21.786517 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:21.831934 1076050 cri.go:89] found id: ""
	I0127 15:40:21.831963 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.831974 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:21.831980 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:21.832036 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:21.877230 1076050 cri.go:89] found id: ""
	I0127 15:40:21.877264 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.877275 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:21.877283 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:21.877351 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:21.923993 1076050 cri.go:89] found id: ""
	I0127 15:40:21.924026 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.924038 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:21.924047 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:21.924109 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:21.963890 1076050 cri.go:89] found id: ""
	I0127 15:40:21.963922 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.963931 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:21.963942 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:21.963958 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:22.010706 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:22.010743 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:22.070053 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:22.070096 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:22.085574 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:22.085604 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:22.163198 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:22.163228 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:22.163245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:23.031094 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:24.523077 1075160 pod_ready.go:82] duration metric: took 4m0.001138229s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" ...
	E0127 15:40:24.523130 1075160 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:40:24.523156 1075160 pod_ready.go:39] duration metric: took 4m14.040193884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:24.523186 1075160 kubeadm.go:597] duration metric: took 4m21.511137654s to restartPrimaryControlPlane
	W0127 15:40:24.523251 1075160 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:40:24.523280 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:24.747046 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:24.761103 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:24.761194 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:24.806570 1076050 cri.go:89] found id: ""
	I0127 15:40:24.806659 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.806679 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:24.806689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:24.806755 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:24.854651 1076050 cri.go:89] found id: ""
	I0127 15:40:24.854684 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.854697 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:24.854705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:24.854773 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:24.915668 1076050 cri.go:89] found id: ""
	I0127 15:40:24.915705 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.915718 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:24.915728 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:24.915794 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:24.975570 1076050 cri.go:89] found id: ""
	I0127 15:40:24.975610 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.975623 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:24.975632 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:24.975704 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:25.025853 1076050 cri.go:89] found id: ""
	I0127 15:40:25.025885 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.025896 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:25.025903 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:25.025980 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:25.064940 1076050 cri.go:89] found id: ""
	I0127 15:40:25.064976 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.064987 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:25.064996 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:25.065082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:25.110507 1076050 cri.go:89] found id: ""
	I0127 15:40:25.110539 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.110549 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:25.110558 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:25.110622 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:25.150241 1076050 cri.go:89] found id: ""
	I0127 15:40:25.150288 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.150299 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:25.150313 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:25.150330 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:25.243205 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:25.243238 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:25.243255 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:25.323856 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:25.323900 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:25.367207 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:25.367245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:25.429072 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:25.429120 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:27.945904 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:27.959618 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:27.959708 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:27.999655 1076050 cri.go:89] found id: ""
	I0127 15:40:27.999685 1076050 logs.go:282] 0 containers: []
	W0127 15:40:27.999697 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:27.999705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:27.999768 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:28.039662 1076050 cri.go:89] found id: ""
	I0127 15:40:28.039695 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.039708 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:28.039716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:28.039786 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:28.075418 1076050 cri.go:89] found id: ""
	I0127 15:40:28.075451 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.075462 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:28.075472 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:28.075542 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:28.114964 1076050 cri.go:89] found id: ""
	I0127 15:40:28.115023 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.115036 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:28.115045 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:28.115106 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:28.153086 1076050 cri.go:89] found id: ""
	I0127 15:40:28.153115 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.153126 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:28.153135 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:28.153198 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:28.189564 1076050 cri.go:89] found id: ""
	I0127 15:40:28.189597 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.189607 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:28.189623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:28.189680 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:28.228037 1076050 cri.go:89] found id: ""
	I0127 15:40:28.228067 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.228076 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:28.228083 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:28.228163 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:28.277124 1076050 cri.go:89] found id: ""
	I0127 15:40:28.277155 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.277168 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:28.277179 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:28.277192 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:28.340183 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:28.340231 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:28.356822 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:28.356854 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:28.428923 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:28.428951 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:28.428968 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:28.833666 1074659 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:28.833746 1074659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:28.833840 1074659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:28.833927 1074659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:28.834008 1074659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:28.834082 1074659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:28.835576 1074659 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:28.835644 1074659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:28.835701 1074659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:28.835776 1074659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:28.835840 1074659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:28.835918 1074659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:28.835984 1074659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:28.836079 1074659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:28.836170 1074659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:28.836279 1074659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:28.836382 1074659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:28.836440 1074659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:28.836506 1074659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:28.836564 1074659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:28.836645 1074659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:28.836728 1074659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:28.836800 1074659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:28.836889 1074659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:28.836973 1074659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:28.837079 1074659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:28.838668 1074659 out.go:235]   - Booting up control plane ...
	I0127 15:40:28.838772 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:28.838882 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:28.838967 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:28.839120 1074659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:28.839212 1074659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:28.839261 1074659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:28.839412 1074659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:28.839527 1074659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:28.839621 1074659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.133738ms
	I0127 15:40:28.839718 1074659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:28.839793 1074659 kubeadm.go:310] [api-check] The API server is healthy after 5.001467165s
	I0127 15:40:28.839883 1074659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:40:28.840019 1074659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:40:28.840098 1074659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:40:28.840257 1074659 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-458006 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:40:28.840304 1074659 kubeadm.go:310] [bootstrap-token] Using token: ysn4g1.5k9s54b5xvzc8py2
	I0127 15:40:28.841707 1074659 out.go:235]   - Configuring RBAC rules ...
	I0127 15:40:28.841821 1074659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:40:28.841908 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:40:28.842072 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:40:28.842254 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:40:28.842425 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:40:28.842542 1074659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:40:28.842654 1074659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:40:28.842695 1074659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:40:28.842739 1074659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:40:28.842746 1074659 kubeadm.go:310] 
	I0127 15:40:28.842794 1074659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:40:28.842803 1074659 kubeadm.go:310] 
	I0127 15:40:28.842866 1074659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:40:28.842878 1074659 kubeadm.go:310] 
	I0127 15:40:28.842923 1074659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:40:28.843010 1074659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:40:28.843103 1074659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:40:28.843112 1074659 kubeadm.go:310] 
	I0127 15:40:28.843207 1074659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:40:28.843222 1074659 kubeadm.go:310] 
	I0127 15:40:28.843297 1074659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:40:28.843312 1074659 kubeadm.go:310] 
	I0127 15:40:28.843389 1074659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:40:28.843486 1074659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:40:28.843560 1074659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:40:28.843568 1074659 kubeadm.go:310] 
	I0127 15:40:28.843641 1074659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:40:28.843710 1074659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:40:28.843716 1074659 kubeadm.go:310] 
	I0127 15:40:28.843788 1074659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ysn4g1.5k9s54b5xvzc8py2 \
	I0127 15:40:28.843875 1074659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:40:28.843899 1074659 kubeadm.go:310] 	--control-plane 
	I0127 15:40:28.843908 1074659 kubeadm.go:310] 
	I0127 15:40:28.844015 1074659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:40:28.844024 1074659 kubeadm.go:310] 
	I0127 15:40:28.844090 1074659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ysn4g1.5k9s54b5xvzc8py2 \
	I0127 15:40:28.844200 1074659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:40:28.844221 1074659 cni.go:84] Creating CNI manager for ""
	I0127 15:40:28.844233 1074659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:40:28.845800 1074659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:40:28.847251 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:40:28.858165 1074659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
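	[Editor's note] The log above records only that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the file contents are not captured in this report. For orientation, a representative bridge CNI conflist can be written on the node as below. This is an illustrative sketch only — the plugin fields and the 10.244.0.0/16 subnet are assumptions, not the actual payload minikube copied.

	    # Illustrative only: write a minimal bridge + portmap conflist (not the exact file from the log)
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF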
	I0127 15:40:28.881328 1074659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:40:28.881400 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:28.881455 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-458006 minikube.k8s.io/updated_at=2025_01_27T15_40_28_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=no-preload-458006 minikube.k8s.io/primary=true
	I0127 15:40:28.897996 1074659 ops.go:34] apiserver oom_adj: -16
	I0127 15:40:29.095553 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:29.596344 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:30.096320 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:30.596512 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:31.096689 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:31.596534 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:32.096361 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:32.595892 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:33.095702 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:33.238790 1074659 kubeadm.go:1113] duration metric: took 4.357463541s to wait for elevateKubeSystemPrivileges
	I0127 15:40:33.238848 1074659 kubeadm.go:394] duration metric: took 5m2.327511742s to StartCluster
	I0127 15:40:33.238888 1074659 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:33.239099 1074659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:40:33.240861 1074659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:33.241710 1074659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:40:33.241765 1074659 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:40:33.241896 1074659 addons.go:69] Setting storage-provisioner=true in profile "no-preload-458006"
	I0127 15:40:33.241924 1074659 addons.go:238] Setting addon storage-provisioner=true in "no-preload-458006"
	W0127 15:40:33.241936 1074659 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:40:33.241970 1074659 config.go:182] Loaded profile config "no-preload-458006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:40:33.241993 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242098 1074659 addons.go:69] Setting default-storageclass=true in profile "no-preload-458006"
	I0127 15:40:33.242136 1074659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-458006"
	I0127 15:40:33.242491 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.242558 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.242562 1074659 addons.go:69] Setting dashboard=true in profile "no-preload-458006"
	I0127 15:40:33.242579 1074659 addons.go:238] Setting addon dashboard=true in "no-preload-458006"
	W0127 15:40:33.242587 1074659 addons.go:247] addon dashboard should already be in state true
	I0127 15:40:33.242619 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242642 1074659 addons.go:69] Setting metrics-server=true in profile "no-preload-458006"
	I0127 15:40:33.242681 1074659 addons.go:238] Setting addon metrics-server=true in "no-preload-458006"
	W0127 15:40:33.242703 1074659 addons.go:247] addon metrics-server should already be in state true
	I0127 15:40:33.242748 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242982 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243002 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243017 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.243038 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.243162 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243195 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.246220 1074659 out.go:177] * Verifying Kubernetes components...
	I0127 15:40:33.247844 1074659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:40:33.260866 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I0127 15:40:33.260900 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0127 15:40:33.260867 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0127 15:40:33.261687 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.261705 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.261805 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0127 15:40:33.262293 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262298 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262311 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.262320 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.262394 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.262663 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.262770 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.262824 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.262973 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262988 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.263265 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.263294 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.263301 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.263705 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.263777 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.263793 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.264103 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.264138 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.264160 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.265173 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.265220 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.266841 1074659 addons.go:238] Setting addon default-storageclass=true in "no-preload-458006"
	W0127 15:40:33.266861 1074659 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:40:33.266882 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.267142 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.267186 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.284237 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0127 15:40:33.284787 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.285432 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.285458 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.285817 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.286054 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.288006 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.288915 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0127 15:40:33.289278 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0127 15:40:33.289464 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.289551 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.290021 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.290033 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.290128 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.290135 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.290430 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.290487 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.290488 1074659 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:40:33.290680 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.290956 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.293313 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.293608 1074659 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:40:33.293756 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.295556 1074659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:40:33.295557 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:40:33.295679 1074659 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:40:33.295688 1074659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:40:32.977057 1074908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.901370931s)
	I0127 15:40:32.977156 1074908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:32.998093 1074908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:33.014544 1074908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:33.041108 1074908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:33.041138 1074908 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:33.041203 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:40:33.058390 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:33.058462 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:33.070074 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:40:33.087447 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:33.087524 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:33.101890 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:40:33.112384 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:33.112460 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:33.122774 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:40:33.133115 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:33.133183 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:33.143719 1074908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:33.201432 1074908 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:33.201519 1074908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:33.371439 1074908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:33.371619 1074908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:33.371746 1074908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:33.380800 1074908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:28.505128 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:28.505170 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:31.047029 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:31.060582 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:31.060685 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:31.097127 1076050 cri.go:89] found id: ""
	I0127 15:40:31.097150 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.097160 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:31.097168 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:31.097230 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:31.134764 1076050 cri.go:89] found id: ""
	I0127 15:40:31.134799 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.134810 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:31.134818 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:31.134900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:31.174779 1076050 cri.go:89] found id: ""
	I0127 15:40:31.174807 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.174816 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:31.174822 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:31.174875 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:31.215471 1076050 cri.go:89] found id: ""
	I0127 15:40:31.215503 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.215513 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:31.215519 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:31.215572 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:31.253765 1076050 cri.go:89] found id: ""
	I0127 15:40:31.253796 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.253804 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:31.253811 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:31.253867 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:31.297130 1076050 cri.go:89] found id: ""
	I0127 15:40:31.297161 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.297170 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:31.297176 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:31.297240 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:31.335280 1076050 cri.go:89] found id: ""
	I0127 15:40:31.335315 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.335326 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:31.335334 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:31.335406 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:31.372619 1076050 cri.go:89] found id: ""
	I0127 15:40:31.372652 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.372664 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:31.372678 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:31.372693 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:31.427666 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:31.427709 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:31.442810 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:31.442842 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:31.511297 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:31.511330 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:31.511354 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:31.595122 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:31.595168 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
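	[Editor's note] The repeated found id: "" / "0 containers" entries above come from the harness probing each control-plane component with crictl while the API server on localhost:8443 is unreachable. The probe can be reproduced by hand on the node (e.g. via minikube ssh); the individual crictl invocation is taken verbatim from the Run: lines above, and the loop is just a convenience wrapper.

	    # Run on the node; empty output for a component corresponds to 'No container was found matching' in the log
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"
	    done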
	I0127 15:40:33.383521 1074908 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:33.383651 1074908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:33.383757 1074908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:33.383895 1074908 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:33.383985 1074908 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:33.384074 1074908 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:33.384147 1074908 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:33.384245 1074908 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:33.384323 1074908 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:33.384413 1074908 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:33.384510 1074908 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:33.384563 1074908 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:33.384642 1074908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:33.553965 1074908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:33.739507 1074908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:33.994637 1074908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:34.154265 1074908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:34.373069 1074908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:34.373791 1074908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:34.379843 1074908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:33.295709 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.297475 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:40:33.297501 1074659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:40:33.297523 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.300714 1074659 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:33.300736 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:40:33.300756 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.301635 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I0127 15:40:33.302333 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.302863 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.302880 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.303349 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.303970 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.304013 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.305284 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.305834 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.305864 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306025 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.306086 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306246 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.306406 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.306488 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306592 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.309540 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.309565 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.309810 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.310021 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.310146 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.310163 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.310320 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.310404 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.310566 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.310593 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.310786 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.310945 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.329960 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 15:40:33.330745 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.331477 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.331497 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.331931 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.332248 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.334148 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.337343 1074659 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:33.337364 1074659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:40:33.337387 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.344679 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.345163 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.345261 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.345521 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.345738 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.345938 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.346117 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.464899 1074659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:40:33.489798 1074659 node_ready.go:35] waiting up to 6m0s for node "no-preload-458006" to be "Ready" ...
	I0127 15:40:33.523407 1074659 node_ready.go:49] node "no-preload-458006" has status "Ready":"True"
	I0127 15:40:33.523440 1074659 node_ready.go:38] duration metric: took 33.61111ms for node "no-preload-458006" to be "Ready" ...
	I0127 15:40:33.523453 1074659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:33.535257 1074659 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:33.568512 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:33.587974 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:40:33.588003 1074659 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:40:33.619075 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:40:33.619099 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:40:33.633023 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:40:33.633068 1074659 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:40:33.642970 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:33.657566 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:40:33.657595 1074659 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:40:33.664558 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:40:33.664588 1074659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:40:33.687856 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:40:33.687883 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:40:33.714005 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:40:33.714036 1074659 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:40:33.727527 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:33.727554 1074659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:40:33.764439 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:33.790606 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:40:33.790639 1074659 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:40:33.826641 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.826674 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.827044 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.827065 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.827075 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.827083 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.827331 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.827363 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:33.827373 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.834226 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.834269 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.834561 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.834578 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.867815 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:40:33.867848 1074659 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:40:33.891318 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:40:33.891362 1074659 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:40:33.964578 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:33.964616 1074659 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:40:34.002418 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:34.279743 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.279829 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.280331 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:34.280397 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.280425 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.280447 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.280473 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.280769 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:34.280818 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.280833 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.817958 1074659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053479215s)
	I0127 15:40:34.818069 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.818092 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.818435 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.818495 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.818509 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.818518 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.818778 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.818799 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.818811 1074659 addons.go:479] Verifying addon metrics-server=true in "no-preload-458006"
	I0127 15:40:35.547309 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:36.514576 1074659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.512097478s)
	I0127 15:40:36.514647 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:36.514666 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:36.515033 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:36.515046 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:36.515111 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:36.515130 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:36.515153 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:36.515488 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:36.515527 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:36.515503 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:36.517645 1074659 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-458006 addons enable metrics-server
	
	I0127 15:40:36.519535 1074659 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 15:40:36.520964 1074659 addons.go:514] duration metric: took 3.279215802s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
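	[Editor's note] After the "Enabled addons" line above, the enabled set can be confirmed from the host. The metrics-server command is quoted from the log's own hint; the other two lines are a routine verification sketch (the dashboard namespace name follows the standard dashboard manifests applied above and is an assumption here, not something the log states).

	    minikube -p no-preload-458006 addons list                               # should show default-storageclass, storage-provisioner, metrics-server, dashboard enabled
	    minikube -p no-preload-458006 addons enable metrics-server              # already enabled in this run; repeated from the log's hint
	    kubectl --context no-preload-458006 get pods -n kubernetes-dashboard    # dashboard addon workloads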
	I0127 15:40:34.138287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:34.156651 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:34.156734 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:34.194604 1076050 cri.go:89] found id: ""
	I0127 15:40:34.194647 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.194658 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:34.194666 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:34.194729 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:34.233299 1076050 cri.go:89] found id: ""
	I0127 15:40:34.233353 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.233363 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:34.233369 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:34.233423 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:34.274424 1076050 cri.go:89] found id: ""
	I0127 15:40:34.274453 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.274465 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:34.274473 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:34.274539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:34.317113 1076050 cri.go:89] found id: ""
	I0127 15:40:34.317144 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.317155 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:34.317168 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:34.317239 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:34.359212 1076050 cri.go:89] found id: ""
	I0127 15:40:34.359242 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.359252 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:34.359261 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:34.359328 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:34.398773 1076050 cri.go:89] found id: ""
	I0127 15:40:34.398805 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.398824 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:34.398833 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:34.398910 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:34.440053 1076050 cri.go:89] found id: ""
	I0127 15:40:34.440087 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.440099 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:34.440107 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:34.440178 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:34.482908 1076050 cri.go:89] found id: ""
	I0127 15:40:34.482943 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.482959 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:34.482973 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:34.482992 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:34.500178 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:34.500206 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:34.580251 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:34.580279 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:34.580302 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:34.673730 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:34.673772 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:34.720797 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:34.720838 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:37.282487 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:37.300162 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:37.300231 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:37.348753 1076050 cri.go:89] found id: ""
	I0127 15:40:37.348786 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.348798 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:37.348806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:37.348870 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:37.398630 1076050 cri.go:89] found id: ""
	I0127 15:40:37.398669 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.398681 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:37.398689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:37.398761 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:37.437030 1076050 cri.go:89] found id: ""
	I0127 15:40:37.437127 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.437155 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:37.437188 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:37.437277 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:37.477745 1076050 cri.go:89] found id: ""
	I0127 15:40:37.477837 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.477855 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:37.477864 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:37.477937 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:37.514259 1076050 cri.go:89] found id: ""
	I0127 15:40:37.514292 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.514302 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:37.514311 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:37.514385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:37.551313 1076050 cri.go:89] found id: ""
	I0127 15:40:37.551349 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.551359 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:37.551367 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:37.551427 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:37.593740 1076050 cri.go:89] found id: ""
	I0127 15:40:37.593772 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.593783 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:37.593791 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:37.593854 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:37.634133 1076050 cri.go:89] found id: ""
	I0127 15:40:37.634169 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.634181 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:37.634194 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:37.634217 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:37.699046 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:37.699092 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:37.717470 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:37.717512 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:37.791051 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:37.791077 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:37.791106 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:37.882694 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:37.882742 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:34.381325 1074908 out.go:235]   - Booting up control plane ...
	I0127 15:40:34.381471 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:34.381579 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:34.382092 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:34.406494 1074908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:34.413899 1074908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:34.414029 1074908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:34.583151 1074908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:34.583269 1074908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:35.584905 1074908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001687336s
	I0127 15:40:35.585033 1074908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:40.587681 1074908 kubeadm.go:310] [api-check] The API server is healthy after 5.001284493s
	I0127 15:40:40.610814 1074908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:40:40.631959 1074908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:40:40.691115 1074908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:40:40.691368 1074908 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-349782 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:40:40.717976 1074908 kubeadm.go:310] [bootstrap-token] Using token: 2miseq.yzn49d7krpbx0jxu
	I0127 15:40:40.719603 1074908 out.go:235]   - Configuring RBAC rules ...
	I0127 15:40:40.719764 1074908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:40:40.734536 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:40:40.754140 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:40:40.763500 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:40:40.769897 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:40:40.777335 1074908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:40:40.995105 1074908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:40:41.449029 1074908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:40:41.995223 1074908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:40:41.996543 1074908 kubeadm.go:310] 
	I0127 15:40:41.996660 1074908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:40:41.996672 1074908 kubeadm.go:310] 
	I0127 15:40:41.996788 1074908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:40:41.996798 1074908 kubeadm.go:310] 
	I0127 15:40:41.996838 1074908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:40:41.996921 1074908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:40:41.996994 1074908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:40:41.997025 1074908 kubeadm.go:310] 
	I0127 15:40:41.997151 1074908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:40:41.997173 1074908 kubeadm.go:310] 
	I0127 15:40:41.997241 1074908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:40:41.997253 1074908 kubeadm.go:310] 
	I0127 15:40:41.997329 1074908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:40:41.997435 1074908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:40:41.997539 1074908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:40:41.997547 1074908 kubeadm.go:310] 
	I0127 15:40:41.997672 1074908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:40:41.997777 1074908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:40:41.997789 1074908 kubeadm.go:310] 
	I0127 15:40:41.997873 1074908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2miseq.yzn49d7krpbx0jxu \
	I0127 15:40:41.997954 1074908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:40:41.997974 1074908 kubeadm.go:310] 	--control-plane 
	I0127 15:40:41.997980 1074908 kubeadm.go:310] 
	I0127 15:40:41.998045 1074908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:40:41.998056 1074908 kubeadm.go:310] 
	I0127 15:40:41.998117 1074908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2miseq.yzn49d7krpbx0jxu \
	I0127 15:40:41.998204 1074908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:40:41.999397 1074908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:40:41.999437 1074908 cni.go:84] Creating CNI manager for ""
	I0127 15:40:41.999448 1074908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:40:42.001383 1074908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:40:38.042609 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:40.046811 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:40.431585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:40.449664 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:40.449766 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:40.500904 1076050 cri.go:89] found id: ""
	I0127 15:40:40.500995 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.501020 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:40.501029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:40.501103 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:40.543907 1076050 cri.go:89] found id: ""
	I0127 15:40:40.543939 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.543950 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:40.543958 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:40.544018 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:40.592294 1076050 cri.go:89] found id: ""
	I0127 15:40:40.592328 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.592339 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:40.592352 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:40.592418 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:40.641396 1076050 cri.go:89] found id: ""
	I0127 15:40:40.641429 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.641439 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:40.641449 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:40.641522 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:40.687151 1076050 cri.go:89] found id: ""
	I0127 15:40:40.687185 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.687197 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:40.687206 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:40.687279 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:40.728537 1076050 cri.go:89] found id: ""
	I0127 15:40:40.728573 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.728584 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:40.728593 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:40.728666 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:40.770995 1076050 cri.go:89] found id: ""
	I0127 15:40:40.771022 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.771035 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:40.771042 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:40.771108 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:40.818299 1076050 cri.go:89] found id: ""
	I0127 15:40:40.818332 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.818344 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:40.818357 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:40.818379 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:40.835538 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:40.835566 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:40.912785 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:40.912812 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:40.912829 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:41.029124 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:41.029177 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:41.088618 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:41.088649 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:42.002886 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:40:42.019774 1074908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:40:42.041710 1074908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:40:42.041880 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:42.042011 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-349782 minikube.k8s.io/updated_at=2025_01_27T15_40_42_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=embed-certs-349782 minikube.k8s.io/primary=true
	I0127 15:40:42.071903 1074908 ops.go:34] apiserver oom_adj: -16
	I0127 15:40:42.298644 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:42.799727 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:43.299289 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:43.799485 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:44.299597 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:44.799559 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:45.299631 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:45.388381 1074908 kubeadm.go:1113] duration metric: took 3.346560313s to wait for elevateKubeSystemPrivileges
	I0127 15:40:45.388421 1074908 kubeadm.go:394] duration metric: took 5m3.554845692s to StartCluster
	I0127 15:40:45.388444 1074908 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:45.388536 1074908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:40:45.390768 1074908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:45.391081 1074908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.43 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:40:45.391145 1074908 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:40:45.391269 1074908 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-349782"
	I0127 15:40:45.391288 1074908 addons.go:69] Setting dashboard=true in profile "embed-certs-349782"
	I0127 15:40:45.391320 1074908 addons.go:238] Setting addon dashboard=true in "embed-certs-349782"
	I0127 15:40:45.391319 1074908 addons.go:69] Setting metrics-server=true in profile "embed-certs-349782"
	I0127 15:40:45.391294 1074908 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-349782"
	I0127 15:40:45.391334 1074908 config.go:182] Loaded profile config "embed-certs-349782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:40:45.391343 1074908 addons.go:238] Setting addon metrics-server=true in "embed-certs-349782"
	W0127 15:40:45.391353 1074908 addons.go:247] addon metrics-server should already be in state true
	W0127 15:40:45.391330 1074908 addons.go:247] addon dashboard should already be in state true
	W0127 15:40:45.391338 1074908 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:40:45.391406 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391417 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391276 1074908 addons.go:69] Setting default-storageclass=true in profile "embed-certs-349782"
	I0127 15:40:45.391503 1074908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-349782"
	I0127 15:40:45.391386 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391836 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391838 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391876 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.391925 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391951 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.391954 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391982 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.392044 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.396751 1074908 out.go:177] * Verifying Kubernetes components...
	I0127 15:40:45.398763 1074908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:40:45.411089 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0127 15:40:45.411341 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0127 15:40:45.411740 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.411839 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.412321 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.412348 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.412429 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45519
	I0127 15:40:45.412455 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.412471 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.412710 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.412921 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.413145 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0127 15:40:45.413359 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.413399 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.413439 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.413451 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.413623 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.413854 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.413991 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.414216 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.414233 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.414273 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.414298 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.414583 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.414766 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.414772 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.414845 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.418728 1074908 addons.go:238] Setting addon default-storageclass=true in "embed-certs-349782"
	W0127 15:40:45.418755 1074908 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:40:45.418787 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.419153 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.419189 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.436563 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0127 15:40:45.437032 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.437309 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0127 15:40:45.437764 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.437783 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.437859 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0127 15:40:45.437986 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.438180 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.438423 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.438439 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.438503 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.438549 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.439042 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.439059 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.439120 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.439496 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.439564 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.440296 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.440349 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.440835 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.441539 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0127 15:40:45.442136 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.442687 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.443524 1074908 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:40:45.443584 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.443599 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.443863 1074908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:40:45.443950 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.444664 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.445476 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:40:45.445498 1074908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:40:45.445531 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.446460 1074908 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:40:45.446697 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.451306 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:40:45.456066 1074908 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:40:45.452788 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.456096 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.454144 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.456132 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.456169 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.456379 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.456396 1074908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:40:42.547331 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:44.081830 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.081865 1074659 pod_ready.go:82] duration metric: took 10.546579527s for pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.081882 1074659 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.097962 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.097994 1074659 pod_ready.go:82] duration metric: took 16.102725ms for pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.098014 1074659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.117810 1074659 pod_ready.go:93] pod "etcd-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.117845 1074659 pod_ready.go:82] duration metric: took 19.821766ms for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.117861 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.147522 1074659 pod_ready.go:93] pod "kube-apiserver-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.147557 1074659 pod_ready.go:82] duration metric: took 29.685956ms for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.147573 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.163535 1074659 pod_ready.go:93] pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.163570 1074659 pod_ready.go:82] duration metric: took 15.987018ms for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.163585 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6j6r5" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.440133 1074659 pod_ready.go:93] pod "kube-proxy-6j6r5" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.440165 1074659 pod_ready.go:82] duration metric: took 276.571766ms for pod "kube-proxy-6j6r5" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.440180 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.865610 1074659 pod_ready.go:93] pod "kube-scheduler-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.865643 1074659 pod_ready.go:82] duration metric: took 425.453541ms for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.865655 1074659 pod_ready.go:39] duration metric: took 11.34218973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:44.865682 1074659 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:40:44.865746 1074659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:44.906758 1074659 api_server.go:72] duration metric: took 11.665005612s to wait for apiserver process to appear ...
	I0127 15:40:44.906793 1074659 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:40:44.906829 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:40:44.912296 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 200:
	ok
	I0127 15:40:44.913396 1074659 api_server.go:141] control plane version: v1.32.1
	I0127 15:40:44.913416 1074659 api_server.go:131] duration metric: took 6.606206ms to wait for apiserver health ...
	I0127 15:40:44.913424 1074659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:40:45.045967 1074659 system_pods.go:59] 9 kube-system pods found
	I0127 15:40:45.046012 1074659 system_pods.go:61] "coredns-668d6bf9bc-sp7p4" [7fbb8eca-e2e6-4760-a0b6-8c6387fe9960] Running
	I0127 15:40:45.046020 1074659 system_pods.go:61] "coredns-668d6bf9bc-xgx78" [c3cc3887-d694-4b39-9ad1-c03fcf97b608] Running
	I0127 15:40:45.046025 1074659 system_pods.go:61] "etcd-no-preload-458006" [2474c045-aaa4-4190-8392-3dea1976ded1] Running
	I0127 15:40:45.046031 1074659 system_pods.go:61] "kube-apiserver-no-preload-458006" [2529a3ec-c6a0-4cc7-b93a-7964e435ada0] Running
	I0127 15:40:45.046038 1074659 system_pods.go:61] "kube-controller-manager-no-preload-458006" [989d2483-4dc3-4add-ad64-7f76d4b5c765] Running
	I0127 15:40:45.046043 1074659 system_pods.go:61] "kube-proxy-6j6r5" [3ca06a87-654b-42c2-ac04-12d9b0472973] Running
	I0127 15:40:45.046047 1074659 system_pods.go:61] "kube-scheduler-no-preload-458006" [f6afe797-0eed-4f54-8ed6-fbe75d411b7a] Running
	I0127 15:40:45.046056 1074659 system_pods.go:61] "metrics-server-f79f97bbb-k7879" [137f45e8-cf1d-404b-af06-4b99a257450f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:40:45.046063 1074659 system_pods.go:61] "storage-provisioner" [8e874460-b5bf-4ce6-b1ca-9c188b1fd4e6] Running
	I0127 15:40:45.046074 1074659 system_pods.go:74] duration metric: took 132.642132ms to wait for pod list to return data ...
	I0127 15:40:45.046089 1074659 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:40:45.246663 1074659 default_sa.go:45] found service account: "default"
	I0127 15:40:45.246694 1074659 default_sa.go:55] duration metric: took 200.600423ms for default service account to be created ...
	I0127 15:40:45.246707 1074659 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:40:45.449871 1074659 system_pods.go:87] 9 kube-system pods found
	I0127 15:40:43.646818 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:43.660154 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:43.660237 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:43.698517 1076050 cri.go:89] found id: ""
	I0127 15:40:43.698548 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.698557 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:43.698563 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:43.698624 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:43.736919 1076050 cri.go:89] found id: ""
	I0127 15:40:43.736954 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.736967 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:43.736978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:43.737064 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:43.777333 1076050 cri.go:89] found id: ""
	I0127 15:40:43.777369 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.777382 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:43.777391 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:43.777462 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:43.817427 1076050 cri.go:89] found id: ""
	I0127 15:40:43.817460 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.817471 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:43.817480 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:43.817546 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:43.866498 1076050 cri.go:89] found id: ""
	I0127 15:40:43.866527 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.866538 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:43.866546 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:43.866616 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:43.919477 1076050 cri.go:89] found id: ""
	I0127 15:40:43.919510 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.919521 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:43.919530 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:43.919593 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:43.958203 1076050 cri.go:89] found id: ""
	I0127 15:40:43.958242 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.958261 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:43.958270 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:43.958340 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:43.996729 1076050 cri.go:89] found id: ""
	I0127 15:40:43.996760 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.996769 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:43.996779 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:43.996792 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:44.051707 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:44.051748 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:44.069643 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:44.069674 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:44.146464 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:44.146489 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:44.146505 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:44.230654 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:44.230696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:46.788290 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:46.807855 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:46.807942 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:46.861569 1076050 cri.go:89] found id: ""
	I0127 15:40:46.861596 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.861608 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:46.861615 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:46.861684 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:46.919686 1076050 cri.go:89] found id: ""
	I0127 15:40:46.919719 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.919732 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:46.919741 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:46.919810 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:46.959359 1076050 cri.go:89] found id: ""
	I0127 15:40:46.959419 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.959432 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:46.959440 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:46.959503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:47.000445 1076050 cri.go:89] found id: ""
	I0127 15:40:47.000489 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.000503 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:47.000512 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:47.000583 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:47.041395 1076050 cri.go:89] found id: ""
	I0127 15:40:47.041426 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.041440 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:47.041449 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:47.041512 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:47.086753 1076050 cri.go:89] found id: ""
	I0127 15:40:47.086787 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.086800 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:47.086808 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:47.086883 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:47.128760 1076050 cri.go:89] found id: ""
	I0127 15:40:47.128788 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.128799 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:47.128807 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:47.128876 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:47.173743 1076050 cri.go:89] found id: ""
	I0127 15:40:47.173779 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.173791 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:47.173804 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:47.173818 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:47.280755 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:47.280817 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:47.343245 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:47.343291 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:47.425229 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:47.425282 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:47.446605 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:47.446649 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:47.563807 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:45.456519 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.456939 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.457981 1074908 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:45.458002 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:40:45.458020 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.460172 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.460862 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.460921 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.461259 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.461487 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.461715 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.461874 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.462195 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.462273 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.462309 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.462659 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.462819 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.462924 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.463019 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.464793 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0127 15:40:45.465301 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.465795 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.465815 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.468906 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.469208 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.471230 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.471522 1074908 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:45.471538 1074908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:40:45.471562 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.474700 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.475171 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.475203 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.475388 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.475596 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.475722 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.475899 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.617662 1074908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:40:45.639438 1074908 node_ready.go:35] waiting up to 6m0s for node "embed-certs-349782" to be "Ready" ...
	I0127 15:40:45.668405 1074908 node_ready.go:49] node "embed-certs-349782" has status "Ready":"True"
	I0127 15:40:45.668432 1074908 node_ready.go:38] duration metric: took 28.956722ms for node "embed-certs-349782" to be "Ready" ...
	I0127 15:40:45.668451 1074908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:45.676760 1074908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:45.743936 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:40:45.743967 1074908 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:40:45.755731 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:45.759201 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:40:45.759233 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:40:45.772228 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:45.805739 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:40:45.805773 1074908 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:40:45.823459 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:40:45.823500 1074908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:40:45.854823 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:40:45.854859 1074908 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:40:45.891284 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:45.891327 1074908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:40:45.931396 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:40:45.931431 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:40:46.015320 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:40:46.015360 1074908 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:40:46.015364 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:46.083527 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:40:46.083563 1074908 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:40:46.246566 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:40:46.246597 1074908 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:40:46.376290 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:40:46.376329 1074908 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:40:46.427597 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:46.427631 1074908 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:40:46.482003 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:47.410166 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.637893772s)
	I0127 15:40:47.410259 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.410166 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.654370109s)
	I0127 15:40:47.410282 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.410349 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.410372 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.410843 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.410875 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.412611 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.412628 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.412638 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.412646 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.412761 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.412798 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.412830 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.412850 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.412903 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.413172 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.413266 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.413342 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.414418 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.414437 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.474683 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.474722 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.475077 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.475151 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.475172 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.777164 1074908 pod_ready.go:103] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:47.977107 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.961691521s)
	I0127 15:40:47.977187 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.977203 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.977515 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.977556 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.977595 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.977608 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.977619 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.977883 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.977933 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.977955 1074908 addons.go:479] Verifying addon metrics-server=true in "embed-certs-349782"
	I0127 15:40:47.977965 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:49.266293 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.7842336s)
	I0127 15:40:49.266371 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:49.266386 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:49.266731 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:49.266754 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:49.266771 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:49.266779 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:49.267033 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:49.267086 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:49.267106 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:49.268778 1074908 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-349782 addons enable metrics-server
	
	I0127 15:40:49.270188 1074908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 15:40:52.460023 1075160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.936714261s)
	I0127 15:40:52.460128 1075160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:52.476845 1075160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:52.487966 1075160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:52.499961 1075160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:52.499988 1075160 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:52.500037 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 15:40:52.511034 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:52.511115 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:52.524517 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 15:40:52.534966 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:52.535048 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:52.545245 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 15:40:52.555070 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:52.555149 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:52.569605 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 15:40:52.581711 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:52.581794 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:52.592228 1075160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:52.654498 1075160 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:52.654647 1075160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:52.779741 1075160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:52.779912 1075160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:52.780069 1075160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:52.790096 1075160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:50.064460 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:50.080142 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:50.080219 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:50.120604 1076050 cri.go:89] found id: ""
	I0127 15:40:50.120643 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.120655 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:50.120661 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:50.120716 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:50.161728 1076050 cri.go:89] found id: ""
	I0127 15:40:50.161766 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.161777 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:50.161785 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:50.161851 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:50.199247 1076050 cri.go:89] found id: ""
	I0127 15:40:50.199275 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.199286 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:50.199293 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:50.199369 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:50.246623 1076050 cri.go:89] found id: ""
	I0127 15:40:50.246652 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.246663 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:50.246672 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:50.246742 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:50.284077 1076050 cri.go:89] found id: ""
	I0127 15:40:50.284111 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.284123 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:50.284132 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:50.284200 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:50.326481 1076050 cri.go:89] found id: ""
	I0127 15:40:50.326518 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.326530 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:50.326539 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:50.326597 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:50.364165 1076050 cri.go:89] found id: ""
	I0127 15:40:50.364198 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.364210 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:50.364218 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:50.364280 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:50.402527 1076050 cri.go:89] found id: ""
	I0127 15:40:50.402560 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.402572 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:50.402586 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:50.402602 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:50.485370 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:50.485412 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:50.539508 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:50.539547 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:50.591618 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:50.591656 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:50.609824 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:50.609873 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:50.694094 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:53.194813 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:53.211192 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:53.211271 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:53.258010 1076050 cri.go:89] found id: ""
	I0127 15:40:53.258042 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.258060 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:53.258069 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:53.258138 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:53.297402 1076050 cri.go:89] found id: ""
	I0127 15:40:53.297430 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.297440 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:53.297448 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:53.297511 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:53.336412 1076050 cri.go:89] found id: ""
	I0127 15:40:53.336440 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.336450 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:53.336457 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:53.336526 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:53.383904 1076050 cri.go:89] found id: ""
	I0127 15:40:53.383939 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.383950 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:53.383959 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:53.384031 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:53.435476 1076050 cri.go:89] found id: ""
	I0127 15:40:53.435512 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.435525 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:53.435533 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:53.435604 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:49.271495 1074908 addons.go:514] duration metric: took 3.880366443s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 15:40:50.196894 1074908 pod_ready.go:103] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:51.684593 1074908 pod_ready.go:93] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:51.684619 1074908 pod_ready.go:82] duration metric: took 6.007831808s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.684632 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.693065 1074908 pod_ready.go:93] pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:51.693095 1074908 pod_ready.go:82] duration metric: took 8.4536ms for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.693110 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:52.703593 1074908 pod_ready.go:93] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:52.703626 1074908 pod_ready.go:82] duration metric: took 1.010507584s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:52.703641 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:53.710652 1074908 pod_ready.go:93] pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:53.710683 1074908 pod_ready.go:82] duration metric: took 1.007031836s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:53.710695 1074908 pod_ready.go:39] duration metric: took 8.042232456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:53.710716 1074908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:40:53.710780 1074908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:53.771554 1074908 api_server.go:72] duration metric: took 8.380427434s to wait for apiserver process to appear ...
	I0127 15:40:53.771585 1074908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:40:53.771611 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:40:53.779085 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 200:
	ok
	I0127 15:40:53.780297 1074908 api_server.go:141] control plane version: v1.32.1
	I0127 15:40:53.780325 1074908 api_server.go:131] duration metric: took 8.731633ms to wait for apiserver health ...
	I0127 15:40:53.780335 1074908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:40:53.788343 1074908 system_pods.go:59] 9 kube-system pods found
	I0127 15:40:53.788373 1074908 system_pods.go:61] "coredns-668d6bf9bc-2ggkc" [ae4bf072-7cfb-4a26-8c71-abd3cbc52c28] Running
	I0127 15:40:53.788380 1074908 system_pods.go:61] "coredns-668d6bf9bc-h92kp" [5c29333b-4ea9-44fa-8be6-c350e6b709fe] Running
	I0127 15:40:53.788384 1074908 system_pods.go:61] "etcd-embed-certs-349782" [fcb552ae-bb9e-49de-a183-a26f8cac7e56] Running
	I0127 15:40:53.788388 1074908 system_pods.go:61] "kube-apiserver-embed-certs-349782" [5161cdd2-9cea-4b6d-9023-b20f56e14d9c] Running
	I0127 15:40:53.788392 1074908 system_pods.go:61] "kube-controller-manager-embed-certs-349782" [defbaf3b-e25a-4e20-a602-4be47bd2cc4b] Running
	I0127 15:40:53.788395 1074908 system_pods.go:61] "kube-proxy-vhpzl" [1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf] Running
	I0127 15:40:53.788398 1074908 system_pods.go:61] "kube-scheduler-embed-certs-349782" [ed785153-6f53-4289-a191-5545960c300f] Running
	I0127 15:40:53.788404 1074908 system_pods.go:61] "metrics-server-f79f97bbb-pnbcx" [af453586-d131-4ba7-aa9f-290eb044d58e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:40:53.788411 1074908 system_pods.go:61] "storage-provisioner" [e5c6e59a-52ab-4707-a438-5d01890928db] Running
	I0127 15:40:53.788422 1074908 system_pods.go:74] duration metric: took 8.079129ms to wait for pod list to return data ...
	I0127 15:40:53.788430 1074908 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:40:52.793113 1075160 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:52.793243 1075160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:52.793339 1075160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:52.793480 1075160 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:52.793582 1075160 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:52.793692 1075160 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:52.793783 1075160 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:52.793875 1075160 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:52.793966 1075160 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:52.794100 1075160 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:52.794204 1075160 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:52.794273 1075160 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:52.794363 1075160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:52.989346 1075160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:53.518286 1075160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:53.684220 1075160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:53.833269 1075160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:53.959433 1075160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:53.959944 1075160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:53.962645 1075160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:53.964848 1075160 out.go:235]   - Booting up control plane ...
	I0127 15:40:53.964986 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:53.965139 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:53.967441 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:53.990143 1075160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:53.997601 1075160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:53.997684 1075160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:54.175814 1075160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:54.175985 1075160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:54.677251 1075160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.539769ms
	I0127 15:40:54.677364 1075160 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:53.477359 1076050 cri.go:89] found id: ""
	I0127 15:40:53.477389 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.477400 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:53.477408 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:53.477473 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:53.522739 1076050 cri.go:89] found id: ""
	I0127 15:40:53.522777 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.522789 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:53.522798 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:53.522870 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:53.591524 1076050 cri.go:89] found id: ""
	I0127 15:40:53.591556 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.591568 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:53.591581 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:53.591601 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:53.645459 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:53.645495 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:53.662522 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:53.662551 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:53.743915 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:53.743940 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:53.743957 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:53.844477 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:53.844511 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:56.390836 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:56.404803 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:56.404892 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:56.448556 1076050 cri.go:89] found id: ""
	I0127 15:40:56.448586 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.448597 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:56.448606 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:56.448674 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:56.501798 1076050 cri.go:89] found id: ""
	I0127 15:40:56.501833 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.501854 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:56.501863 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:56.501932 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:56.549831 1076050 cri.go:89] found id: ""
	I0127 15:40:56.549882 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.549895 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:56.549904 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:56.549976 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:56.604199 1076050 cri.go:89] found id: ""
	I0127 15:40:56.604236 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.604248 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:56.604258 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:56.604361 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:56.662492 1076050 cri.go:89] found id: ""
	I0127 15:40:56.662529 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.662540 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:56.662550 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:56.662621 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:56.712694 1076050 cri.go:89] found id: ""
	I0127 15:40:56.712731 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.712743 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:56.712752 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:56.712821 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:56.759321 1076050 cri.go:89] found id: ""
	I0127 15:40:56.759355 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.759366 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:56.759375 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:56.759441 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:56.806457 1076050 cri.go:89] found id: ""
	I0127 15:40:56.806487 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.806499 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:56.806511 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:56.806528 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:56.885361 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:56.885416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:56.904333 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:56.904390 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:57.003794 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:57.003820 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:57.003845 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:57.107181 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:57.107240 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:53.791640 1074908 default_sa.go:45] found service account: "default"
	I0127 15:40:53.791671 1074908 default_sa.go:55] duration metric: took 3.229036ms for default service account to be created ...
	I0127 15:40:53.791682 1074908 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:40:53.798897 1074908 system_pods.go:87] 9 kube-system pods found
	I0127 15:41:00.679789 1075160 kubeadm.go:310] [api-check] The API server is healthy after 6.002206079s
	I0127 15:41:00.695507 1075160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:41:00.712356 1075160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:41:00.738343 1075160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:41:00.738640 1075160 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-912913 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:41:00.753238 1075160 kubeadm.go:310] [bootstrap-token] Using token: 5gsmwo.93b5mx0ng9gboctz
	I0127 15:41:00.754589 1075160 out.go:235]   - Configuring RBAC rules ...
	I0127 15:41:00.754718 1075160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:41:00.773508 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:41:00.781170 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:41:00.784358 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:41:00.787629 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:41:00.790904 1075160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:41:01.087298 1075160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:41:01.539193 1075160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:41:02.088850 1075160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:41:02.089949 1075160 kubeadm.go:310] 
	I0127 15:41:02.090088 1075160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:41:02.090112 1075160 kubeadm.go:310] 
	I0127 15:41:02.090212 1075160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:41:02.090222 1075160 kubeadm.go:310] 
	I0127 15:41:02.090256 1075160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:41:02.090363 1075160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:41:02.090438 1075160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:41:02.090447 1075160 kubeadm.go:310] 
	I0127 15:41:02.090529 1075160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:41:02.090542 1075160 kubeadm.go:310] 
	I0127 15:41:02.090605 1075160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:41:02.090612 1075160 kubeadm.go:310] 
	I0127 15:41:02.090674 1075160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:41:02.090813 1075160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:41:02.090903 1075160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:41:02.090913 1075160 kubeadm.go:310] 
	I0127 15:41:02.091020 1075160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:41:02.091116 1075160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:41:02.091126 1075160 kubeadm.go:310] 
	I0127 15:41:02.091223 1075160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5gsmwo.93b5mx0ng9gboctz \
	I0127 15:41:02.091357 1075160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:41:02.091383 1075160 kubeadm.go:310] 	--control-plane 
	I0127 15:41:02.091393 1075160 kubeadm.go:310] 
	I0127 15:41:02.091482 1075160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:41:02.091490 1075160 kubeadm.go:310] 
	I0127 15:41:02.091576 1075160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5gsmwo.93b5mx0ng9gboctz \
	I0127 15:41:02.091686 1075160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:41:02.093055 1075160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:41:02.093120 1075160 cni.go:84] Creating CNI manager for ""
	I0127 15:41:02.093134 1075160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:41:02.095065 1075160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:41:02.096511 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:41:02.110508 1075160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:41:02.132628 1075160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:41:02.132723 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:02.132745 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-912913 minikube.k8s.io/updated_at=2025_01_27T15_41_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=default-k8s-diff-port-912913 minikube.k8s.io/primary=true
	I0127 15:41:02.380721 1075160 ops.go:34] apiserver oom_adj: -16
	I0127 15:41:02.380856 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:59.656976 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:59.675626 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:59.675762 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:59.719313 1076050 cri.go:89] found id: ""
	I0127 15:40:59.719343 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.719351 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:59.719357 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:59.719441 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:59.758380 1076050 cri.go:89] found id: ""
	I0127 15:40:59.758419 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.758433 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:59.758441 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:59.758511 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:59.802754 1076050 cri.go:89] found id: ""
	I0127 15:40:59.802787 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.802798 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:59.802806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:59.802874 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:59.847665 1076050 cri.go:89] found id: ""
	I0127 15:40:59.847695 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.847707 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:59.847716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:59.847781 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:59.888840 1076050 cri.go:89] found id: ""
	I0127 15:40:59.888867 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.888875 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:59.888882 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:59.888946 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:59.935416 1076050 cri.go:89] found id: ""
	I0127 15:40:59.935448 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.935460 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:59.935468 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:59.935544 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:59.982418 1076050 cri.go:89] found id: ""
	I0127 15:40:59.982448 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.982456 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:59.982464 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:59.982539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:00.024752 1076050 cri.go:89] found id: ""
	I0127 15:41:00.024794 1076050 logs.go:282] 0 containers: []
	W0127 15:41:00.024806 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:00.024820 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:00.024839 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:00.044330 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:00.044369 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:00.130115 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:00.130216 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:00.130241 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:00.236534 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:00.236585 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:00.312265 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:00.312307 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:02.873155 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:02.889623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:02.889689 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:02.931491 1076050 cri.go:89] found id: ""
	I0127 15:41:02.931528 1076050 logs.go:282] 0 containers: []
	W0127 15:41:02.931537 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:02.931546 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:02.931615 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:02.968872 1076050 cri.go:89] found id: ""
	I0127 15:41:02.968912 1076050 logs.go:282] 0 containers: []
	W0127 15:41:02.968924 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:02.968932 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:02.969030 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:03.004397 1076050 cri.go:89] found id: ""
	I0127 15:41:03.004428 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.004437 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:03.004443 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:03.004498 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:03.042909 1076050 cri.go:89] found id: ""
	I0127 15:41:03.042937 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.042948 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:03.042955 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:03.043020 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:03.081525 1076050 cri.go:89] found id: ""
	I0127 15:41:03.081556 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.081567 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:03.081576 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:03.081645 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:03.122741 1076050 cri.go:89] found id: ""
	I0127 15:41:03.122773 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.122784 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:03.122793 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:03.122855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:03.159043 1076050 cri.go:89] found id: ""
	I0127 15:41:03.159069 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.159077 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:03.159090 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:03.159140 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:03.200367 1076050 cri.go:89] found id: ""
	I0127 15:41:03.200402 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.200414 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:03.200429 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:03.200447 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:03.291239 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:03.291291 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:03.336057 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:03.336098 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:03.395428 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:03.395480 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:03.411878 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:03.411911 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 15:41:02.881961 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:03.381153 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:03.881177 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:04.381381 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:04.881601 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.381394 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.881197 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.963844 1075160 kubeadm.go:1113] duration metric: took 3.831201657s to wait for elevateKubeSystemPrivileges
	I0127 15:41:05.963884 1075160 kubeadm.go:394] duration metric: took 5m3.006407652s to StartCluster
	I0127 15:41:05.963905 1075160 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:41:05.964014 1075160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:41:05.966708 1075160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:41:05.967090 1075160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.160 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:41:05.967165 1075160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:41:05.967282 1075160 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967302 1075160 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.967308 1075160 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:41:05.967326 1075160 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967343 1075160 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967355 1075160 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-912913"
	I0127 15:41:05.967358 1075160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-912913"
	I0127 15:41:05.967357 1075160 config.go:182] Loaded profile config "default-k8s-diff-port-912913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:41:05.967356 1075160 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967381 1075160 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.967390 1075160 addons.go:247] addon dashboard should already be in state true
	W0127 15:41:05.967362 1075160 addons.go:247] addon metrics-server should already be in state true
	I0127 15:41:05.967334 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967433 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967433 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967803 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967829 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967842 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967854 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967866 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967894 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967857 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967954 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.968953 1075160 out.go:177] * Verifying Kubernetes components...
	I0127 15:41:05.970726 1075160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:41:05.986076 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0127 15:41:05.986613 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.987340 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.987367 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.987696 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0127 15:41:05.987879 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0127 15:41:05.987883 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0127 15:41:05.987924 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.988072 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988235 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988485 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988597 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.988641 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.988725 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.988745 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.988760 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.988775 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.989142 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.989164 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.989172 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989192 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989534 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989721 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:05.989770 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.989789 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.989815 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.989827 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.993646 1075160 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.993672 1075160 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:41:05.993703 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.994089 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.994137 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:06.007391 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I0127 15:41:06.007784 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0127 15:41:06.008229 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.008327 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.008859 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.008880 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.008951 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0127 15:41:06.009182 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.009201 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.009660 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.009740 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.009876 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.010328 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.010393 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.010588 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.010748 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.010833 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.025187 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.025199 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.025187 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.025186 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0127 15:41:06.037186 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.037801 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.038419 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.038439 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.038833 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.039733 1075160 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:41:06.039865 1075160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:41:06.039911 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:06.039947 1075160 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:41:06.039975 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:06.041831 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:41:06.041853 1075160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:41:06.041887 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.042817 1075160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:41:06.042833 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:41:06.042854 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.045474 1075160 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:41:06.047233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.047253 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:41:06.047270 1075160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:41:06.047294 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.047965 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.048037 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.048421 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.048675 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.049034 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.049616 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.051299 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.051321 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.051717 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.051739 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.052033 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.052054 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.052088 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.052323 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.052372 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.052526 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.052702 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.057244 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.057489 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.057880 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.058959 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39803
	I0127 15:41:06.059421 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.059854 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.059866 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.060259 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.060421 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.062233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.062753 1075160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:41:06.062767 1075160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:41:06.062781 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.067605 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.068014 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.068027 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.068243 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.068368 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.068559 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.068695 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.211887 1075160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:41:06.257549 1075160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-912913" to be "Ready" ...
	I0127 15:41:06.305423 1075160 node_ready.go:49] node "default-k8s-diff-port-912913" has status "Ready":"True"
	I0127 15:41:06.305459 1075160 node_ready.go:38] duration metric: took 47.864404ms for node "default-k8s-diff-port-912913" to be "Ready" ...
	I0127 15:41:06.305474 1075160 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:41:06.311746 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:41:06.311780 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:41:06.329198 1075160 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:06.374086 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:41:06.374119 1075160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:41:06.377742 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:41:06.377771 1075160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:41:06.400332 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:41:06.403004 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:41:06.430195 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:41:06.430217 1075160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:41:06.487574 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:41:06.487605 1075160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:41:06.529999 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:41:06.530054 1075160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:41:06.609758 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:41:06.619520 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:41:06.619567 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:41:06.795826 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:41:06.795870 1075160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:41:06.889910 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:41:06.889940 1075160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:41:06.979355 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:41:06.979391 1075160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:41:07.053404 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:41:07.053438 1075160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:41:07.101199 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:41:07.101235 1075160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:41:07.165859 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:41:07.419725 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016680012s)
	I0127 15:41:07.419820 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.419839 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.419841 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.019463574s)
	I0127 15:41:07.419916 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.419939 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420292 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420306 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420322 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420352 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.420365 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420366 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420492 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420521 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420530 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.420538 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420775 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420779 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420786 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420814 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420842 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420849 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.438640 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.438681 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.439056 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.439081 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.439091 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	W0127 15:41:03.498183 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:06.000178 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:06.024915 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:06.024973 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:06.098332 1076050 cri.go:89] found id: ""
	I0127 15:41:06.098361 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.098369 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:06.098375 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:06.098430 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:06.156082 1076050 cri.go:89] found id: ""
	I0127 15:41:06.156117 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.156129 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:06.156137 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:06.156203 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:06.217204 1076050 cri.go:89] found id: ""
	I0127 15:41:06.217235 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.217246 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:06.217255 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:06.217331 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:06.259003 1076050 cri.go:89] found id: ""
	I0127 15:41:06.259029 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.259041 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:06.259048 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:06.259123 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:06.298292 1076050 cri.go:89] found id: ""
	I0127 15:41:06.298330 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.298341 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:06.298349 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:06.298416 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:06.339173 1076050 cri.go:89] found id: ""
	I0127 15:41:06.339211 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.339224 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:06.339234 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:06.339309 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:06.381271 1076050 cri.go:89] found id: ""
	I0127 15:41:06.381300 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.381311 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:06.381320 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:06.381385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:06.429073 1076050 cri.go:89] found id: ""
	I0127 15:41:06.429134 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.429149 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:06.429164 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:06.429187 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:06.491509 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:06.491545 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:06.507964 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:06.508011 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:06.589122 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:06.589158 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:06.589173 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:06.668992 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:06.669051 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:07.791715 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.18189835s)
	I0127 15:41:07.791796 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.791813 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.792148 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.792170 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.792181 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.792190 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.792522 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.792570 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.792580 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.792591 1075160 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-912913"
	I0127 15:41:08.375027 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:08.535318 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.369395363s)
	I0127 15:41:08.535382 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:08.535398 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:08.535779 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:08.535833 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:08.535847 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:08.535857 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:08.536129 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:08.536152 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:08.537800 1075160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-912913 addons enable metrics-server
	
	I0127 15:41:08.539323 1075160 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 15:41:08.540713 1075160 addons.go:514] duration metric: took 2.57355558s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 15:41:10.869256 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:09.224594 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:09.239525 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:09.239616 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:09.285116 1076050 cri.go:89] found id: ""
	I0127 15:41:09.285160 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.285172 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:09.285182 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:09.285252 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:09.342278 1076050 cri.go:89] found id: ""
	I0127 15:41:09.342307 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.342323 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:09.342332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:09.342397 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:09.385479 1076050 cri.go:89] found id: ""
	I0127 15:41:09.385506 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.385515 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:09.385521 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:09.385580 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:09.426386 1076050 cri.go:89] found id: ""
	I0127 15:41:09.426426 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.426439 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:09.426448 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:09.426516 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:09.468739 1076050 cri.go:89] found id: ""
	I0127 15:41:09.468776 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.468789 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:09.468798 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:09.468866 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:09.510885 1076050 cri.go:89] found id: ""
	I0127 15:41:09.510918 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.510931 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:09.510939 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:09.511007 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:09.548406 1076050 cri.go:89] found id: ""
	I0127 15:41:09.548442 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.548455 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:09.548464 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:09.548547 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:09.589727 1076050 cri.go:89] found id: ""
	I0127 15:41:09.589761 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.589773 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:09.589786 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:09.589802 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:09.641717 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:09.641759 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:09.712152 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:09.712220 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:09.730069 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:09.730119 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:09.808412 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:09.808447 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:09.808462 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:12.421654 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:12.440156 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:12.440298 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:12.489759 1076050 cri.go:89] found id: ""
	I0127 15:41:12.489788 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.489800 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:12.489809 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:12.489887 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:12.540068 1076050 cri.go:89] found id: ""
	I0127 15:41:12.540099 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.540108 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:12.540114 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:12.540178 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:12.587471 1076050 cri.go:89] found id: ""
	I0127 15:41:12.587497 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.587505 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:12.587511 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:12.587578 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:12.638634 1076050 cri.go:89] found id: ""
	I0127 15:41:12.638668 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.638680 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:12.638689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:12.638762 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:12.683784 1076050 cri.go:89] found id: ""
	I0127 15:41:12.683815 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.683826 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:12.683837 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:12.683900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:12.720438 1076050 cri.go:89] found id: ""
	I0127 15:41:12.720479 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.720488 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:12.720495 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:12.720548 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:12.759175 1076050 cri.go:89] found id: ""
	I0127 15:41:12.759207 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.759219 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:12.759226 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:12.759290 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:12.792624 1076050 cri.go:89] found id: ""
	I0127 15:41:12.792656 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.792668 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:12.792681 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:12.792697 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:12.878341 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:12.878386 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:12.926986 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:12.927028 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:12.982133 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:12.982172 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:12.999460 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:12.999503 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:13.087892 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:13.336050 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:15.338501 1075160 pod_ready.go:93] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.338533 1075160 pod_ready.go:82] duration metric: took 9.009294324s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.338546 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.343866 1075160 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.343889 1075160 pod_ready.go:82] duration metric: took 5.336104ms for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.343898 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.349389 1075160 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.349413 1075160 pod_ready.go:82] duration metric: took 5.508752ms for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.349422 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.355144 1075160 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.355166 1075160 pod_ready.go:82] duration metric: took 5.737289ms for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.355173 1075160 pod_ready.go:39] duration metric: took 9.049686447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:41:15.355191 1075160 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:41:15.355243 1075160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:15.370942 1075160 api_server.go:72] duration metric: took 9.403809848s to wait for apiserver process to appear ...
	I0127 15:41:15.370967 1075160 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:41:15.370986 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:41:15.378733 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 200:
	ok
	I0127 15:41:15.380614 1075160 api_server.go:141] control plane version: v1.32.1
	I0127 15:41:15.380640 1075160 api_server.go:131] duration metric: took 9.666454ms to wait for apiserver health ...
	I0127 15:41:15.380649 1075160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:41:15.390107 1075160 system_pods.go:59] 9 kube-system pods found
	I0127 15:41:15.390141 1075160 system_pods.go:61] "coredns-668d6bf9bc-8rzrt" [92e346ae-cc28-4f80-9424-c4d97ac8106c] Running
	I0127 15:41:15.390147 1075160 system_pods.go:61] "coredns-668d6bf9bc-zw9rm" [c29a853d-5146-4641-a434-d85147dc3b16] Running
	I0127 15:41:15.390151 1075160 system_pods.go:61] "etcd-default-k8s-diff-port-912913" [4eb15463-b135-4347-9c0b-ff5cd9fa0991] Running
	I0127 15:41:15.390155 1075160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-912913" [f1d151d9-bd66-41f1-b2e8-bb495f8a3522] Running
	I0127 15:41:15.390159 1075160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-912913" [da81a47f-a89e-4daa-828c-e1dc1458067c] Running
	I0127 15:41:15.390161 1075160 system_pods.go:61] "kube-proxy-k85rn" [8da8dc48-3019-4fa6-b5c4-58b0b41aefc0] Running
	I0127 15:41:15.390165 1075160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-912913" [9042c262-515d-40d9-9d99-fda8f49b141a] Running
	I0127 15:41:15.390170 1075160 system_pods.go:61] "metrics-server-f79f97bbb-rtx6b" [aed61473-0cc8-4459-9153-5c42e5a10b2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:41:15.390174 1075160 system_pods.go:61] "storage-provisioner" [5fa7b229-cd7d-4aa4-9cee-26a1c5714b3c] Running
	I0127 15:41:15.390184 1075160 system_pods.go:74] duration metric: took 9.526361ms to wait for pod list to return data ...
	I0127 15:41:15.390193 1075160 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:41:15.394345 1075160 default_sa.go:45] found service account: "default"
	I0127 15:41:15.394371 1075160 default_sa.go:55] duration metric: took 4.169137ms for default service account to be created ...
	I0127 15:41:15.394380 1075160 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:41:15.537654 1075160 system_pods.go:87] 9 kube-system pods found
	I0127 15:41:15.589166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:15.607749 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:15.607824 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:15.655722 1076050 cri.go:89] found id: ""
	I0127 15:41:15.655752 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.655764 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:15.655773 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:15.655847 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:15.703202 1076050 cri.go:89] found id: ""
	I0127 15:41:15.703235 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.703248 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:15.703256 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:15.703360 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:15.747335 1076050 cri.go:89] found id: ""
	I0127 15:41:15.747371 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.747383 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:15.747400 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:15.747470 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:15.786207 1076050 cri.go:89] found id: ""
	I0127 15:41:15.786245 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.786259 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:15.786269 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:15.786351 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:15.826251 1076050 cri.go:89] found id: ""
	I0127 15:41:15.826286 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.826298 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:15.826306 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:15.826435 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:15.873134 1076050 cri.go:89] found id: ""
	I0127 15:41:15.873167 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.873187 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:15.873195 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:15.873267 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:15.923221 1076050 cri.go:89] found id: ""
	I0127 15:41:15.923273 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.923286 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:15.923294 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:15.923364 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:15.967245 1076050 cri.go:89] found id: ""
	I0127 15:41:15.967282 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.967295 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:15.967309 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:15.967325 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:16.057675 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:16.057706 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:16.057722 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:16.141133 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:16.141181 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:16.186832 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:16.186869 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:16.255430 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:16.255473 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:18.774206 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:18.792191 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:18.792258 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:18.835636 1076050 cri.go:89] found id: ""
	I0127 15:41:18.835674 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.835685 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:18.835693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:18.835763 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:18.875370 1076050 cri.go:89] found id: ""
	I0127 15:41:18.875423 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.875435 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:18.875444 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:18.875517 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:18.915439 1076050 cri.go:89] found id: ""
	I0127 15:41:18.915469 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.915480 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:18.915489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:18.915554 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:18.962331 1076050 cri.go:89] found id: ""
	I0127 15:41:18.962359 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.962366 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:18.962372 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:18.962425 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:19.017809 1076050 cri.go:89] found id: ""
	I0127 15:41:19.017839 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.017849 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:19.017857 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:19.017924 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:19.066418 1076050 cri.go:89] found id: ""
	I0127 15:41:19.066454 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.066463 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:19.066469 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:19.066540 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:19.107181 1076050 cri.go:89] found id: ""
	I0127 15:41:19.107212 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.107221 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:19.107227 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:19.107286 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:19.148999 1076050 cri.go:89] found id: ""
	I0127 15:41:19.149043 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.149055 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:19.149070 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:19.149093 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:19.235472 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:19.235514 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:19.290762 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:19.290794 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:19.349155 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:19.349201 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:19.365924 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:19.365957 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:19.455480 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:21.957147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:21.971580 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:21.971732 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:22.011493 1076050 cri.go:89] found id: ""
	I0127 15:41:22.011523 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.011531 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:22.011537 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:22.011600 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:22.047592 1076050 cri.go:89] found id: ""
	I0127 15:41:22.047615 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.047623 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:22.047635 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:22.047704 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:22.084231 1076050 cri.go:89] found id: ""
	I0127 15:41:22.084258 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.084266 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:22.084272 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:22.084331 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:22.126843 1076050 cri.go:89] found id: ""
	I0127 15:41:22.126870 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.126881 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:22.126890 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:22.126952 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:22.167538 1076050 cri.go:89] found id: ""
	I0127 15:41:22.167563 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.167572 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:22.167579 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:22.167633 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:22.206138 1076050 cri.go:89] found id: ""
	I0127 15:41:22.206169 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.206180 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:22.206193 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:22.206259 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:22.245152 1076050 cri.go:89] found id: ""
	I0127 15:41:22.245186 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.245199 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:22.245207 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:22.245273 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:22.280780 1076050 cri.go:89] found id: ""
	I0127 15:41:22.280820 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.280831 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:22.280844 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:22.280859 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:22.333940 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:22.333975 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:22.348880 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:22.348910 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:22.421581 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:22.421610 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:22.421625 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:22.502157 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:22.502199 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:25.045123 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:25.058997 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:25.059058 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:25.094852 1076050 cri.go:89] found id: ""
	I0127 15:41:25.094881 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.094888 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:25.094896 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:25.094955 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:25.136390 1076050 cri.go:89] found id: ""
	I0127 15:41:25.136414 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.136424 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:25.136432 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:25.136491 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:25.173187 1076050 cri.go:89] found id: ""
	I0127 15:41:25.173213 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.173221 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:25.173226 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:25.173284 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:25.210946 1076050 cri.go:89] found id: ""
	I0127 15:41:25.210977 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.210990 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:25.210999 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:25.211082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:25.251607 1076050 cri.go:89] found id: ""
	I0127 15:41:25.251633 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.251643 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:25.251649 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:25.251702 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:25.286803 1076050 cri.go:89] found id: ""
	I0127 15:41:25.286831 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.286842 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:25.286849 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:25.286914 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:25.322818 1076050 cri.go:89] found id: ""
	I0127 15:41:25.322846 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.322857 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:25.322866 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:25.322936 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:25.361082 1076050 cri.go:89] found id: ""
	I0127 15:41:25.361110 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.361120 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:25.361130 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:25.361142 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:25.412378 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:25.412416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:25.427170 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:25.427206 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:25.498342 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:25.498377 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:25.498393 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:25.589099 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:25.589152 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:28.130224 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:28.145326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:28.145389 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:28.186258 1076050 cri.go:89] found id: ""
	I0127 15:41:28.186293 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.186316 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:28.186326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:28.186408 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:28.224332 1076050 cri.go:89] found id: ""
	I0127 15:41:28.224370 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.224382 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:28.224393 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:28.224462 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:28.262236 1076050 cri.go:89] found id: ""
	I0127 15:41:28.262267 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.262274 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:28.262282 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:28.262334 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:28.299248 1076050 cri.go:89] found id: ""
	I0127 15:41:28.299281 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.299290 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:28.299300 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:28.299358 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:28.340255 1076050 cri.go:89] found id: ""
	I0127 15:41:28.340289 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.340301 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:28.340326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:28.340396 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:28.384857 1076050 cri.go:89] found id: ""
	I0127 15:41:28.384891 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.384903 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:28.384912 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:28.384983 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:28.428121 1076050 cri.go:89] found id: ""
	I0127 15:41:28.428158 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.428169 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:28.428179 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:28.428248 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:28.473305 1076050 cri.go:89] found id: ""
	I0127 15:41:28.473332 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.473340 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:28.473350 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:28.473368 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:28.571238 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:28.571271 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:28.571316 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:28.651696 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:28.651731 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:28.692842 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:28.692870 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:28.748091 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:28.748133 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:31.262275 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:31.278085 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:31.278174 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:31.313339 1076050 cri.go:89] found id: ""
	I0127 15:41:31.313366 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.313375 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:31.313381 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:31.313450 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:31.351690 1076050 cri.go:89] found id: ""
	I0127 15:41:31.351716 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.351726 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:31.351732 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:31.351797 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:31.387516 1076050 cri.go:89] found id: ""
	I0127 15:41:31.387547 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.387556 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:31.387562 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:31.387617 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:31.422030 1076050 cri.go:89] found id: ""
	I0127 15:41:31.422062 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.422070 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:31.422076 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:31.422134 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:31.458563 1076050 cri.go:89] found id: ""
	I0127 15:41:31.458592 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.458604 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:31.458612 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:31.458679 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:31.496029 1076050 cri.go:89] found id: ""
	I0127 15:41:31.496064 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.496075 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:31.496090 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:31.496156 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:31.543782 1076050 cri.go:89] found id: ""
	I0127 15:41:31.543808 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.543816 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:31.543822 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:31.543874 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:31.581950 1076050 cri.go:89] found id: ""
	I0127 15:41:31.581987 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.582001 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:31.582014 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:31.582032 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:31.653329 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:31.653358 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:31.653374 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:31.736286 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:31.736323 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:31.782977 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:31.783009 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:31.842741 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:31.842773 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:34.357158 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:34.370137 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:34.370204 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:34.414297 1076050 cri.go:89] found id: ""
	I0127 15:41:34.414334 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.414347 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:34.414356 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:34.414437 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:34.450717 1076050 cri.go:89] found id: ""
	I0127 15:41:34.450749 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.450759 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:34.450767 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:34.450832 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:34.490881 1076050 cri.go:89] found id: ""
	I0127 15:41:34.490915 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.490928 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:34.490937 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:34.491012 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:34.526240 1076050 cri.go:89] found id: ""
	I0127 15:41:34.526277 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.526289 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:34.526297 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:34.526365 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:34.562664 1076050 cri.go:89] found id: ""
	I0127 15:41:34.562700 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.562712 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:34.562721 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:34.562788 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:34.600382 1076050 cri.go:89] found id: ""
	I0127 15:41:34.600411 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.600422 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:34.600430 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:34.600496 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:34.636399 1076050 cri.go:89] found id: ""
	I0127 15:41:34.636431 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.636443 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:34.636451 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:34.636518 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:34.676900 1076050 cri.go:89] found id: ""
	I0127 15:41:34.676935 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.676948 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:34.676961 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:34.676975 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:34.730519 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:34.730555 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:34.746159 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:34.746188 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:34.823410 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:34.823447 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:34.823468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:34.907572 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:34.907611 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:37.485412 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:37.499659 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:37.499761 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:37.536578 1076050 cri.go:89] found id: ""
	I0127 15:41:37.536608 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.536618 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:37.536627 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:37.536703 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:37.573737 1076050 cri.go:89] found id: ""
	I0127 15:41:37.573773 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.573783 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:37.573790 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:37.573861 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:37.611200 1076050 cri.go:89] found id: ""
	I0127 15:41:37.611232 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.611241 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:37.611248 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:37.611302 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:37.646784 1076050 cri.go:89] found id: ""
	I0127 15:41:37.646812 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.646823 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:37.646832 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:37.646900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:37.684664 1076050 cri.go:89] found id: ""
	I0127 15:41:37.684694 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.684706 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:37.684714 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:37.684777 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:37.721812 1076050 cri.go:89] found id: ""
	I0127 15:41:37.721850 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.721863 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:37.721874 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:37.721944 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:37.759256 1076050 cri.go:89] found id: ""
	I0127 15:41:37.759279 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.759287 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:37.759293 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:37.759345 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:37.798971 1076050 cri.go:89] found id: ""
	I0127 15:41:37.799004 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.799017 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:37.799030 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:37.799041 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:37.855679 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:37.855719 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:37.869799 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:37.869833 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:37.943918 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:37.943944 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:37.943956 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:38.035563 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:38.035611 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:40.581178 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:40.597341 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:40.597409 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:40.634799 1076050 cri.go:89] found id: ""
	I0127 15:41:40.634827 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.634836 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:40.634843 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:40.634910 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:40.684392 1076050 cri.go:89] found id: ""
	I0127 15:41:40.684421 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.684429 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:40.684437 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:40.684504 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:40.729085 1076050 cri.go:89] found id: ""
	I0127 15:41:40.729120 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.729131 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:40.729139 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:40.729212 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:40.778437 1076050 cri.go:89] found id: ""
	I0127 15:41:40.778469 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.778482 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:40.778489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:40.778556 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:40.820889 1076050 cri.go:89] found id: ""
	I0127 15:41:40.820914 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.820922 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:40.820928 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:40.820992 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:40.858256 1076050 cri.go:89] found id: ""
	I0127 15:41:40.858284 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.858296 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:40.858304 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:40.858374 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:40.897931 1076050 cri.go:89] found id: ""
	I0127 15:41:40.897957 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.897966 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:40.897972 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:40.898026 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:40.937068 1076050 cri.go:89] found id: ""
	I0127 15:41:40.937100 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.937111 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:40.937124 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:40.937138 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:41.012844 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:41.012867 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:41.012880 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:41.093680 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:41.093722 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:41.136964 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:41.136996 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:41.190396 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:41.190435 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:43.708328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:43.722838 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:43.722928 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:43.762360 1076050 cri.go:89] found id: ""
	I0127 15:41:43.762395 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.762407 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:43.762416 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:43.762483 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:43.802226 1076050 cri.go:89] found id: ""
	I0127 15:41:43.802266 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.802279 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:43.802287 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:43.802363 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:43.848037 1076050 cri.go:89] found id: ""
	I0127 15:41:43.848067 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.848081 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:43.848100 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:43.848167 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:43.891393 1076050 cri.go:89] found id: ""
	I0127 15:41:43.891491 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.891506 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:43.891516 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:43.891585 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:43.936352 1076050 cri.go:89] found id: ""
	I0127 15:41:43.936447 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.936467 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:43.936481 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:43.936632 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:43.980165 1076050 cri.go:89] found id: ""
	I0127 15:41:43.980192 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.980200 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:43.980206 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:43.980264 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:44.019889 1076050 cri.go:89] found id: ""
	I0127 15:41:44.019925 1076050 logs.go:282] 0 containers: []
	W0127 15:41:44.019938 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:44.019946 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:44.020005 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:44.057363 1076050 cri.go:89] found id: ""
	I0127 15:41:44.057400 1076050 logs.go:282] 0 containers: []
	W0127 15:41:44.057412 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:44.057426 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:44.057442 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:44.072218 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:44.072249 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:44.148918 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:44.148944 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:44.148960 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:44.231300 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:44.231347 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:44.273468 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:44.273507 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:46.833142 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:46.848106 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:46.848174 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:46.886223 1076050 cri.go:89] found id: ""
	I0127 15:41:46.886250 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.886258 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:46.886264 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:46.886315 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:46.923854 1076050 cri.go:89] found id: ""
	I0127 15:41:46.923883 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.923891 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:46.923903 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:46.923956 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:46.962084 1076050 cri.go:89] found id: ""
	I0127 15:41:46.962112 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.962120 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:46.962128 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:46.962189 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:46.998299 1076050 cri.go:89] found id: ""
	I0127 15:41:46.998329 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.998338 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:46.998344 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:46.998401 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:47.036481 1076050 cri.go:89] found id: ""
	I0127 15:41:47.036519 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.036531 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:47.036540 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:47.036606 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:47.072486 1076050 cri.go:89] found id: ""
	I0127 15:41:47.072522 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.072534 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:47.072543 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:47.072610 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:47.116871 1076050 cri.go:89] found id: ""
	I0127 15:41:47.116912 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.116937 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:47.116947 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:47.117049 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:47.157060 1076050 cri.go:89] found id: ""
	I0127 15:41:47.157092 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.157104 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:47.157118 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:47.157135 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:47.210998 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:47.211040 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:47.224898 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:47.224926 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:47.306490 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:47.306521 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:47.306540 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:47.394529 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:47.394582 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:49.942182 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:49.958258 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:49.958321 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:49.997962 1076050 cri.go:89] found id: ""
	I0127 15:41:49.997999 1076050 logs.go:282] 0 containers: []
	W0127 15:41:49.998019 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:49.998029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:49.998091 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:50.042973 1076050 cri.go:89] found id: ""
	I0127 15:41:50.043007 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.043015 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:50.043021 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:50.043078 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:50.080466 1076050 cri.go:89] found id: ""
	I0127 15:41:50.080496 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.080506 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:50.080514 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:50.080581 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:50.122155 1076050 cri.go:89] found id: ""
	I0127 15:41:50.122187 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.122199 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:50.122208 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:50.122270 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:50.160215 1076050 cri.go:89] found id: ""
	I0127 15:41:50.160245 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.160254 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:50.160262 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:50.160315 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:50.200684 1076050 cri.go:89] found id: ""
	I0127 15:41:50.200710 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.200719 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:50.200724 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:50.200790 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:50.238625 1076050 cri.go:89] found id: ""
	I0127 15:41:50.238650 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.238658 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:50.238664 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:50.238721 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:50.276187 1076050 cri.go:89] found id: ""
	I0127 15:41:50.276217 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.276227 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:50.276238 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:50.276258 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:50.327617 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:50.327675 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:50.343530 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:50.343561 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:50.420740 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:50.420764 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:50.420776 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:50.506757 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:50.506809 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:53.057745 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:53.073259 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:53.073338 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:53.111798 1076050 cri.go:89] found id: ""
	I0127 15:41:53.111831 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.111839 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:53.111849 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:53.111921 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:53.151928 1076050 cri.go:89] found id: ""
	I0127 15:41:53.151959 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.151970 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:53.151978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:53.152045 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:53.187310 1076050 cri.go:89] found id: ""
	I0127 15:41:53.187357 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.187369 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:53.187377 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:53.187443 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:53.230758 1076050 cri.go:89] found id: ""
	I0127 15:41:53.230786 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.230795 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:53.230800 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:53.230852 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:53.266244 1076050 cri.go:89] found id: ""
	I0127 15:41:53.266276 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.266285 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:53.266291 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:53.266356 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:53.302601 1076050 cri.go:89] found id: ""
	I0127 15:41:53.302628 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.302638 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:53.302647 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:53.302710 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:53.342505 1076050 cri.go:89] found id: ""
	I0127 15:41:53.342541 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.342551 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:53.342561 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:53.342643 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:53.379672 1076050 cri.go:89] found id: ""
	I0127 15:41:53.379706 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.379718 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:53.379730 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:53.379745 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:53.421809 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:53.421852 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:53.475330 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:53.475369 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:53.490625 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:53.490652 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:53.560602 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:53.560627 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:53.560637 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:56.148600 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:56.162485 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:56.162564 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:56.200397 1076050 cri.go:89] found id: ""
	I0127 15:41:56.200434 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.200447 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:56.200458 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:56.200523 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:56.236022 1076050 cri.go:89] found id: ""
	I0127 15:41:56.236067 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.236078 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:56.236086 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:56.236154 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:56.275920 1076050 cri.go:89] found id: ""
	I0127 15:41:56.275956 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.275966 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:56.275975 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:56.276046 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:56.312921 1076050 cri.go:89] found id: ""
	I0127 15:41:56.312953 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.312963 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:56.312971 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:56.313056 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:56.352348 1076050 cri.go:89] found id: ""
	I0127 15:41:56.352373 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.352381 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:56.352387 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:56.352440 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:56.398556 1076050 cri.go:89] found id: ""
	I0127 15:41:56.398591 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.398603 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:56.398617 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:56.398686 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:56.440032 1076050 cri.go:89] found id: ""
	I0127 15:41:56.440063 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.440071 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:56.440078 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:56.440137 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:56.476249 1076050 cri.go:89] found id: ""
	I0127 15:41:56.476280 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.476291 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:56.476305 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:56.476321 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:56.530965 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:56.531017 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:56.545838 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:56.545869 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:56.618187 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:56.618245 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:56.618257 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:56.701048 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:56.701087 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:59.248508 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:59.262851 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:59.262928 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:59.300917 1076050 cri.go:89] found id: ""
	I0127 15:41:59.300947 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.300959 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:59.300967 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:59.301062 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:59.345421 1076050 cri.go:89] found id: ""
	I0127 15:41:59.345452 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.345463 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:59.345471 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:59.345568 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:59.381990 1076050 cri.go:89] found id: ""
	I0127 15:41:59.382025 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.382037 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:59.382046 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:59.382115 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:59.420410 1076050 cri.go:89] found id: ""
	I0127 15:41:59.420456 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.420466 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:59.420472 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:59.420543 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:59.461365 1076050 cri.go:89] found id: ""
	I0127 15:41:59.461391 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.461403 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:59.461412 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:59.461480 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:59.497094 1076050 cri.go:89] found id: ""
	I0127 15:41:59.497122 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.497130 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:59.497136 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:59.497201 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:59.537636 1076050 cri.go:89] found id: ""
	I0127 15:41:59.537663 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.537672 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:59.537680 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:59.537780 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:59.572954 1076050 cri.go:89] found id: ""
	I0127 15:41:59.572984 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.572993 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:59.573023 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:59.573039 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:59.660416 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:59.660457 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:59.702396 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:59.702423 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:59.758534 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:59.758583 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:59.772463 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:59.772496 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:59.849599 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:02.350500 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:02.364408 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:02.364483 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:02.400537 1076050 cri.go:89] found id: ""
	I0127 15:42:02.400574 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.400588 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:02.400596 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:02.400664 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:02.442696 1076050 cri.go:89] found id: ""
	I0127 15:42:02.442731 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.442743 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:02.442751 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:02.442825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:02.485485 1076050 cri.go:89] found id: ""
	I0127 15:42:02.485511 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.485522 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:02.485529 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:02.485595 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:02.524989 1076050 cri.go:89] found id: ""
	I0127 15:42:02.525036 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.525048 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:02.525057 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:02.525137 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:02.560538 1076050 cri.go:89] found id: ""
	I0127 15:42:02.560567 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.560578 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:02.560586 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:02.560649 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:02.602960 1076050 cri.go:89] found id: ""
	I0127 15:42:02.602996 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.603008 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:02.603017 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:02.603082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:02.645389 1076050 cri.go:89] found id: ""
	I0127 15:42:02.645415 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.645425 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:02.645436 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:02.645502 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:02.689493 1076050 cri.go:89] found id: ""
	I0127 15:42:02.689526 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.689537 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:02.689549 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:02.689578 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:02.746806 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:02.746848 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:02.761212 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:02.761243 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:02.841116 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:02.841135 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:02.841147 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:02.932117 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:02.932159 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:05.477139 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:05.491255 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:05.491337 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:05.527520 1076050 cri.go:89] found id: ""
	I0127 15:42:05.527551 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.527563 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:05.527572 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:05.527639 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:05.569699 1076050 cri.go:89] found id: ""
	I0127 15:42:05.569731 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.569743 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:05.569752 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:05.569825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:05.607615 1076050 cri.go:89] found id: ""
	I0127 15:42:05.607654 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.607667 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:05.607677 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:05.607750 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:05.644591 1076050 cri.go:89] found id: ""
	I0127 15:42:05.644622 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.644634 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:05.644642 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:05.644693 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:05.684235 1076050 cri.go:89] found id: ""
	I0127 15:42:05.684258 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.684265 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:05.684272 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:05.684327 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:05.722858 1076050 cri.go:89] found id: ""
	I0127 15:42:05.722902 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.722914 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:05.722924 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:05.722989 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:05.759028 1076050 cri.go:89] found id: ""
	I0127 15:42:05.759062 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.759074 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:05.759082 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:05.759203 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:05.799551 1076050 cri.go:89] found id: ""
	I0127 15:42:05.799580 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.799592 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:05.799608 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:05.799624 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:05.859709 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:05.859763 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:05.873857 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:05.873893 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:05.950048 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:05.950080 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:05.950097 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:06.027916 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:06.027961 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:08.576361 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:08.591092 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:08.591172 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:08.629233 1076050 cri.go:89] found id: ""
	I0127 15:42:08.629262 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.629271 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:08.629277 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:08.629330 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:08.664138 1076050 cri.go:89] found id: ""
	I0127 15:42:08.664172 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.664183 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:08.664192 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:08.664254 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:08.702076 1076050 cri.go:89] found id: ""
	I0127 15:42:08.702113 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.702124 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:08.702132 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:08.702195 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:08.738780 1076050 cri.go:89] found id: ""
	I0127 15:42:08.738813 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.738823 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:08.738831 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:08.738904 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:08.773890 1076050 cri.go:89] found id: ""
	I0127 15:42:08.773922 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.773930 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:08.773936 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:08.773987 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:08.808430 1076050 cri.go:89] found id: ""
	I0127 15:42:08.808465 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.808477 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:08.808485 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:08.808553 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:08.844590 1076050 cri.go:89] found id: ""
	I0127 15:42:08.844615 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.844626 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:08.844634 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:08.844701 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:08.888333 1076050 cri.go:89] found id: ""
	I0127 15:42:08.888368 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.888377 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:08.888388 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:08.888420 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:08.941417 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:08.941453 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:08.956868 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:08.956942 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:09.049362 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:09.049390 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:09.049406 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:09.129215 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:09.129255 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:11.675550 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:11.690737 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:11.690808 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:11.727524 1076050 cri.go:89] found id: ""
	I0127 15:42:11.727554 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.727564 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:11.727572 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:11.727635 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:11.764046 1076050 cri.go:89] found id: ""
	I0127 15:42:11.764073 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.764082 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:11.764089 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:11.764142 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:11.799530 1076050 cri.go:89] found id: ""
	I0127 15:42:11.799562 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.799574 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:11.799582 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:11.799647 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:11.839880 1076050 cri.go:89] found id: ""
	I0127 15:42:11.839912 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.839921 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:11.839927 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:11.839989 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:11.876263 1076050 cri.go:89] found id: ""
	I0127 15:42:11.876313 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.876324 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:11.876332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:11.876403 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:11.919106 1076050 cri.go:89] found id: ""
	I0127 15:42:11.919136 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.919144 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:11.919150 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:11.919209 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:11.957253 1076050 cri.go:89] found id: ""
	I0127 15:42:11.957285 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.957296 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:11.957304 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:11.957369 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:11.993481 1076050 cri.go:89] found id: ""
	I0127 15:42:11.993515 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.993527 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:11.993544 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:11.993560 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:12.063236 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:12.063264 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:12.063285 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:12.149889 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:12.149932 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:12.195704 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:12.195730 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:12.254422 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:12.254457 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:14.768483 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:14.782452 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:14.782539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:14.822523 1076050 cri.go:89] found id: ""
	I0127 15:42:14.822558 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.822570 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:14.822576 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:14.822654 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:14.861058 1076050 cri.go:89] found id: ""
	I0127 15:42:14.861085 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.861094 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:14.861099 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:14.861164 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:14.898147 1076050 cri.go:89] found id: ""
	I0127 15:42:14.898178 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.898189 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:14.898199 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:14.898265 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:14.936269 1076050 cri.go:89] found id: ""
	I0127 15:42:14.936299 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.936307 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:14.936313 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:14.936378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:14.971287 1076050 cri.go:89] found id: ""
	I0127 15:42:14.971320 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.971332 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:14.971341 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:14.971394 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:15.011649 1076050 cri.go:89] found id: ""
	I0127 15:42:15.011679 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.011687 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:15.011693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:15.011744 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:15.047290 1076050 cri.go:89] found id: ""
	I0127 15:42:15.047329 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.047340 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:15.047349 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:15.047413 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:15.089625 1076050 cri.go:89] found id: ""
	I0127 15:42:15.089655 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.089667 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:15.089680 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:15.089694 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:15.136374 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:15.136410 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:15.195628 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:15.195676 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:15.213575 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:15.213679 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:15.293664 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:15.293694 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:15.293707 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:17.882520 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:17.896333 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:17.896403 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:17.935049 1076050 cri.go:89] found id: ""
	I0127 15:42:17.935078 1076050 logs.go:282] 0 containers: []
	W0127 15:42:17.935088 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:17.935096 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:17.935158 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:17.972911 1076050 cri.go:89] found id: ""
	I0127 15:42:17.972946 1076050 logs.go:282] 0 containers: []
	W0127 15:42:17.972958 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:17.972967 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:17.973073 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:18.017249 1076050 cri.go:89] found id: ""
	I0127 15:42:18.017276 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.017286 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:18.017292 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:18.017353 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:18.059963 1076050 cri.go:89] found id: ""
	I0127 15:42:18.059995 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.060007 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:18.060016 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:18.060086 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:18.106174 1076050 cri.go:89] found id: ""
	I0127 15:42:18.106219 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.106232 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:18.106248 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:18.106318 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:18.146130 1076050 cri.go:89] found id: ""
	I0127 15:42:18.146161 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.146176 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:18.146184 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:18.146256 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:18.184143 1076050 cri.go:89] found id: ""
	I0127 15:42:18.184176 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.184185 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:18.184191 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:18.184246 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:18.225042 1076050 cri.go:89] found id: ""
	I0127 15:42:18.225084 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.225096 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:18.225110 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:18.225127 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:18.263543 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:18.263577 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:18.321274 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:18.321323 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:18.336830 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:18.336861 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:18.420928 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:18.420955 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:18.420971 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:21.014731 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:21.030978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:21.031048 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:21.071340 1076050 cri.go:89] found id: ""
	I0127 15:42:21.071370 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.071378 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:21.071385 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:21.071442 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:21.107955 1076050 cri.go:89] found id: ""
	I0127 15:42:21.107987 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.107999 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:21.108006 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:21.108073 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:21.148426 1076050 cri.go:89] found id: ""
	I0127 15:42:21.148465 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.148477 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:21.148488 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:21.148561 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:21.199228 1076050 cri.go:89] found id: ""
	I0127 15:42:21.199262 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.199273 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:21.199282 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:21.199353 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:21.259122 1076050 cri.go:89] found id: ""
	I0127 15:42:21.259156 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.259167 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:21.259175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:21.259249 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:21.316242 1076050 cri.go:89] found id: ""
	I0127 15:42:21.316288 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.316300 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:21.316309 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:21.316378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:21.360071 1076050 cri.go:89] found id: ""
	I0127 15:42:21.360104 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.360116 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:21.360125 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:21.360190 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:21.405056 1076050 cri.go:89] found id: ""
	I0127 15:42:21.405088 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.405099 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:21.405112 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:21.405129 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:21.419657 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:21.419688 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:21.495931 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:21.495957 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:21.495973 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:21.578029 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:21.578075 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:21.626705 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:21.626742 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:24.180267 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:24.193848 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:24.193927 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:24.232734 1076050 cri.go:89] found id: ""
	I0127 15:42:24.232767 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.232778 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:24.232787 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:24.232855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:24.274373 1076050 cri.go:89] found id: ""
	I0127 15:42:24.274410 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.274421 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:24.274430 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:24.274486 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:24.314420 1076050 cri.go:89] found id: ""
	I0127 15:42:24.314449 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.314459 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:24.314469 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:24.314533 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:24.353247 1076050 cri.go:89] found id: ""
	I0127 15:42:24.353284 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.353302 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:24.353311 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:24.353380 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:24.395518 1076050 cri.go:89] found id: ""
	I0127 15:42:24.395545 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.395556 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:24.395564 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:24.395630 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:24.433954 1076050 cri.go:89] found id: ""
	I0127 15:42:24.433988 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.433999 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:24.434008 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:24.434078 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:24.475406 1076050 cri.go:89] found id: ""
	I0127 15:42:24.475438 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.475451 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:24.475460 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:24.475530 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:24.511024 1076050 cri.go:89] found id: ""
	I0127 15:42:24.511062 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.511074 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:24.511086 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:24.511105 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:24.585723 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:24.585746 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:24.585766 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:24.666956 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:24.666997 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:24.707929 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:24.707953 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:24.761870 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:24.761906 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:27.276721 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:27.292246 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:27.292341 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:27.332682 1076050 cri.go:89] found id: ""
	I0127 15:42:27.332715 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.332725 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:27.332733 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:27.332804 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:27.368942 1076050 cri.go:89] found id: ""
	I0127 15:42:27.368975 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.368988 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:27.368997 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:27.369083 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:27.406074 1076050 cri.go:89] found id: ""
	I0127 15:42:27.406116 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.406133 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:27.406141 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:27.406195 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:27.443019 1076050 cri.go:89] found id: ""
	I0127 15:42:27.443049 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.443061 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:27.443069 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:27.443136 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:27.478322 1076050 cri.go:89] found id: ""
	I0127 15:42:27.478359 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.478370 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:27.478380 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:27.478463 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:27.517749 1076050 cri.go:89] found id: ""
	I0127 15:42:27.517781 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.517793 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:27.517802 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:27.517868 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:27.556151 1076050 cri.go:89] found id: ""
	I0127 15:42:27.556182 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.556191 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:27.556197 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:27.556260 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:27.594607 1076050 cri.go:89] found id: ""
	I0127 15:42:27.594638 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.594646 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:27.594656 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:27.594666 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:27.675142 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:27.675184 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:27.719306 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:27.719341 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:27.771036 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:27.771076 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:27.785422 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:27.785451 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:27.863147 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:30.364006 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:30.378275 1076050 kubeadm.go:597] duration metric: took 4m3.244067669s to restartPrimaryControlPlane
	W0127 15:42:30.378392 1076050 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:42:30.378427 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:42:32.324859 1076050 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.946405854s)
	I0127 15:42:32.324949 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:42:32.342099 1076050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:42:32.353110 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:42:32.365238 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:42:32.365259 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:42:32.365309 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:42:32.376623 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:42:32.376679 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:42:32.387533 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:42:32.397645 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:42:32.397706 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:42:32.409015 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:42:32.420172 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:42:32.420236 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:42:32.430688 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:42:32.441797 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:42:32.441856 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:42:32.452009 1076050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:42:32.678031 1076050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:44:29.249145 1076050 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:44:29.249258 1076050 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:44:29.250830 1076050 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:44:29.250891 1076050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:44:29.251016 1076050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:44:29.251168 1076050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:44:29.251317 1076050 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:44:29.251390 1076050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:44:29.253163 1076050 out.go:235]   - Generating certificates and keys ...
	I0127 15:44:29.253266 1076050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:44:29.253389 1076050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:44:29.253470 1076050 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:44:29.253522 1076050 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:44:29.253581 1076050 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:44:29.253626 1076050 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:44:29.253704 1076050 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:44:29.253772 1076050 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:44:29.253864 1076050 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:44:29.253956 1076050 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:44:29.254008 1076050 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:44:29.254112 1076050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:44:29.254215 1076050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:44:29.254305 1076050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:44:29.254391 1076050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:44:29.254466 1076050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:44:29.254625 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:44:29.254763 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:44:29.254826 1076050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:44:29.254989 1076050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:44:29.256624 1076050 out.go:235]   - Booting up control plane ...
	I0127 15:44:29.256744 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:44:29.256829 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:44:29.256905 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:44:29.257025 1076050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:44:29.257228 1076050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:44:29.257290 1076050 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:44:29.257373 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.257657 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.257767 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.257963 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258031 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258254 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258355 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258591 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258669 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258862 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258871 1076050 kubeadm.go:310] 
	I0127 15:44:29.258904 1076050 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:44:29.258972 1076050 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:44:29.258989 1076050 kubeadm.go:310] 
	I0127 15:44:29.259027 1076050 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:44:29.259057 1076050 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:44:29.259205 1076050 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:44:29.259221 1076050 kubeadm.go:310] 
	I0127 15:44:29.259358 1076050 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:44:29.259391 1076050 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:44:29.259444 1076050 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:44:29.259459 1076050 kubeadm.go:310] 
	I0127 15:44:29.259593 1076050 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:44:29.259701 1076050 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:44:29.259710 1076050 kubeadm.go:310] 
	I0127 15:44:29.259818 1076050 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:44:29.259940 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:44:29.260041 1076050 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:44:29.260150 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:44:29.260179 1076050 kubeadm.go:310] 
	W0127 15:44:29.260362 1076050 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 15:44:29.260421 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:44:29.751111 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:44:29.767368 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:44:29.778471 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:44:29.778498 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:44:29.778554 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:44:29.789258 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:44:29.789331 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:44:29.799796 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:44:29.809761 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:44:29.809824 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:44:29.819822 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:44:29.829277 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:44:29.829350 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:44:29.840607 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:44:29.850589 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:44:29.850656 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:44:29.860352 1076050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:44:29.931615 1076050 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:44:29.931737 1076050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:44:30.090907 1076050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:44:30.091038 1076050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:44:30.091180 1076050 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:44:30.288545 1076050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:44:30.290548 1076050 out.go:235]   - Generating certificates and keys ...
	I0127 15:44:30.290678 1076050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:44:30.290777 1076050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:44:30.290899 1076050 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:44:30.290993 1076050 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:44:30.291119 1076050 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:44:30.291213 1076050 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:44:30.291312 1076050 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:44:30.291399 1076050 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:44:30.291523 1076050 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:44:30.291640 1076050 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:44:30.291718 1076050 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:44:30.291806 1076050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:44:30.471428 1076050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:44:30.705804 1076050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:44:30.959802 1076050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:44:31.149201 1076050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:44:31.173695 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:44:31.174653 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:44:31.174752 1076050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:44:31.342124 1076050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:44:31.344077 1076050 out.go:235]   - Booting up control plane ...
	I0127 15:44:31.344184 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:44:31.348014 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:44:31.349159 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:44:31.349960 1076050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:44:31.352168 1076050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:45:11.354910 1076050 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:45:11.355380 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:11.355582 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:16.356239 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:16.356487 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:26.357276 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:26.357605 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:46.358046 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:46.358293 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:46:26.356549 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:46:26.356813 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:46:26.356830 1076050 kubeadm.go:310] 
	I0127 15:46:26.356897 1076050 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:46:26.356938 1076050 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:46:26.356949 1076050 kubeadm.go:310] 
	I0127 15:46:26.357026 1076050 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:46:26.357106 1076050 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:46:26.357302 1076050 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:46:26.357336 1076050 kubeadm.go:310] 
	I0127 15:46:26.357498 1076050 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:46:26.357548 1076050 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:46:26.357607 1076050 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:46:26.357624 1076050 kubeadm.go:310] 
	I0127 15:46:26.357766 1076050 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:46:26.357862 1076050 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:46:26.357878 1076050 kubeadm.go:310] 
	I0127 15:46:26.358043 1076050 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:46:26.358166 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:46:26.358290 1076050 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:46:26.358368 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:46:26.358379 1076050 kubeadm.go:310] 
	I0127 15:46:26.358971 1076050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:46:26.359102 1076050 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:46:26.359219 1076050 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:46:26.359281 1076050 kubeadm.go:394] duration metric: took 7m59.27977519s to StartCluster
	I0127 15:46:26.359443 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:46:26.359522 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:46:26.408713 1076050 cri.go:89] found id: ""
	I0127 15:46:26.408752 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.408764 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:46:26.408772 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:46:26.408832 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:46:26.449156 1076050 cri.go:89] found id: ""
	I0127 15:46:26.449190 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.449200 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:46:26.449208 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:46:26.449306 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:46:26.487786 1076050 cri.go:89] found id: ""
	I0127 15:46:26.487812 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.487820 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:46:26.487827 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:46:26.487876 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:46:26.546745 1076050 cri.go:89] found id: ""
	I0127 15:46:26.546772 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.546782 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:46:26.546791 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:46:26.546855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:46:26.584262 1076050 cri.go:89] found id: ""
	I0127 15:46:26.584300 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.584308 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:46:26.584316 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:46:26.584385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:46:26.622575 1076050 cri.go:89] found id: ""
	I0127 15:46:26.622608 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.622617 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:46:26.622623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:46:26.622683 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:46:26.660928 1076050 cri.go:89] found id: ""
	I0127 15:46:26.660955 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.660964 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:46:26.660970 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:46:26.661062 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:46:26.698084 1076050 cri.go:89] found id: ""
	I0127 15:46:26.698116 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.698125 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:46:26.698139 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:46:26.698151 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:46:26.742459 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:46:26.742486 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:46:26.797935 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:46:26.797977 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:46:26.814213 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:46:26.814248 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:46:26.903335 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:46:26.903373 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:46:26.903392 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 15:46:27.016392 1076050 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 15:46:27.016470 1076050 out.go:270] * 
	W0127 15:46:27.016547 1076050 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:46:27.016561 1076050 out.go:270] * 
	W0127 15:46:27.017322 1076050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 15:46:27.020682 1076050 out.go:201] 
	W0127 15:46:27.022217 1076050 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:46:27.022269 1076050 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 15:46:27.022288 1076050 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 15:46:27.023966 1076050 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.926314930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993330926290695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0162efa8-61b0-44f7-8c72-368f5dd677e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.926976338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e86b9b0-e806-4393-8730-8482d12784de name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.927027362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e86b9b0-e806-4393-8730-8482d12784de name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.927057836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1e86b9b0-e806-4393-8730-8482d12784de name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.963362496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad594367-9211-49b5-a8fb-bedd6dffe7c1 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.963569517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad594367-9211-49b5-a8fb-bedd6dffe7c1 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.964941544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1b1fa40-aff2-4788-8563-f4b71211293f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.965329370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993330965310169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1b1fa40-aff2-4788-8563-f4b71211293f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.966027393Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0719ad39-15df-4c1a-a0e9-60be548f3485 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.966154174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0719ad39-15df-4c1a-a0e9-60be548f3485 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:30 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:30.966204408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0719ad39-15df-4c1a-a0e9-60be548f3485 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.001798860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=207514b5-ae33-44f7-b492-c84154221ae4 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.001933621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=207514b5-ae33-44f7-b492-c84154221ae4 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.009670467Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=514df46b-6dda-4194-aa5a-61422d2656b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.010161537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993331010133432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=514df46b-6dda-4194-aa5a-61422d2656b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.010880141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=222fc5c0-a890-426c-9dd8-3c40c3c36ed3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.010970578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=222fc5c0-a890-426c-9dd8-3c40c3c36ed3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.011004327Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=222fc5c0-a890-426c-9dd8-3c40c3c36ed3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.049837579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfcd83f9-07ce-4f93-995c-9227f26e0364 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.049972915Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfcd83f9-07ce-4f93-995c-9227f26e0364 name=/runtime.v1.RuntimeService/Version
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.051674789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07a1ade9-795e-46b3-b6a0-5ef809476424 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.052205998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993331052179265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07a1ade9-795e-46b3-b6a0-5ef809476424 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.052913697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df60dfeb-f915-4538-a016-2c463cc9dcee name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.053005227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df60dfeb-f915-4538-a016-2c463cc9dcee name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 15:55:31 old-k8s-version-405706 crio[634]: time="2025-01-27 15:55:31.053063118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=df60dfeb-f915-4538-a016-2c463cc9dcee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 15:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054128] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043515] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175374] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.998732] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641220] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.061271] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.065012] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073970] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.202651] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.132479] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.248883] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.567266] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.063012] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.058094] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.932312] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 15:42] systemd-fstab-generator[5003]: Ignoring "noauto" option for root device
	[Jan27 15:44] systemd-fstab-generator[5276]: Ignoring "noauto" option for root device
	[  +0.074147] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:55:31 up 17 min,  0 users,  load average: 0.00, 0.06, 0.07
	Linux old-k8s-version-405706 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000051a90)
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]: goroutine 165 [select]:
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cdfef0, 0x4f0ac20, 0xc000051cc0, 0x1, 0xc0001000c0)
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c7c460, 0xc0001000c0)
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000cca6e0, 0xc000c67300)
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 27 15:55:27 old-k8s-version-405706 kubelet[6444]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 27 15:55:27 old-k8s-version-405706 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 15:55:27 old-k8s-version-405706 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 15:55:28 old-k8s-version-405706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jan 27 15:55:28 old-k8s-version-405706 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 15:55:28 old-k8s-version-405706 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 15:55:28 old-k8s-version-405706 kubelet[6453]: I0127 15:55:28.483963    6453 server.go:416] Version: v1.20.0
	Jan 27 15:55:28 old-k8s-version-405706 kubelet[6453]: I0127 15:55:28.484300    6453 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 15:55:28 old-k8s-version-405706 kubelet[6453]: I0127 15:55:28.486270    6453 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 15:55:28 old-k8s-version-405706 kubelet[6453]: W0127 15:55:28.487931    6453 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 15:55:28 old-k8s-version-405706 kubelet[6453]: I0127 15:55:28.487981    6453 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 2 (257.974954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-405706" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.54s)
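
The kubeadm output captured above already names the most useful follow-ups for this failure: checking the kubelet journal and retrying the start with an explicit cgroup driver (the kubelet log shows "Cannot detect current cgroup on cgroup v2", and the status check above reports the apiserver as "Stopped"). A minimal manual-triage sketch, reusing only the profile name, commands, and flag quoted in the captured log; the exact invocation form is an assumption and none of this was run as part of this report:

	# Inspect the crash-looping kubelet on the node (suggestion quoted in the kubeadm output).
	out/minikube-linux-amd64 -p old-k8s-version-405706 ssh "sudo journalctl -xeu kubelet"
	# List CRI-O containers; 'crictl ... logs CONTAINERID' (quoted above) can then show a failing control-plane container, if any exist.
	out/minikube-linux-amd64 -p old-k8s-version-405706 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# Retry the start with the cgroup-driver hint from the log (see the minikube issue 4172 link above).
	out/minikube-linux-amd64 start -p old-k8s-version-405706 --extra-config=kubelet.cgroup-driver=systemd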

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (341.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:55:38.928314 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:56:08.726365 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:56:46.261759 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 15:57:07.512711 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(the identical warning was logged 24 more times while the API server at 192.168.72.49:8443 kept refusing connections)
E0127 15:57:45.220638 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(the identical warning was logged 30 more times)
E0127 15:58:16.947124 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(the identical warning was logged 49 more times)
E0127 15:59:06.238591 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(the identical warning was logged 25 more times)
E0127 15:59:32.465243 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(the identical warning was logged 43 more times)
E0127 16:00:16.985080 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/calico-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 16:00:38.927707 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/custom-flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0127 16:01:08.725767 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/enable-default-cni-230388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
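Note: the repeated WARNING lines above come from the harness polling for the dashboard pod while the apiserver endpoint 192.168.72.49:8443 refuses connections. The same checks can be run by hand against the profile, assuming its kubeconfig context is still present (illustrative commands, not part of the captured run):

	# list the pods the test waits for, using the same label selector
	kubectl --context old-k8s-version-405706 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# confirm whether the profile's apiserver is reachable at all
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706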
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 2 (254.198777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-405706" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-405706 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-405706 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.941µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-405706 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
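The assertion at start_stop_delete_test.go:295 expects the scraper deployment to carry the custom image registry.k8s.io/echoserver:1.4 that was injected earlier via "addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4" (see the Audit table below). With a reachable apiserver, a minimal manual check of that image, assuming the dashboard-metrics-scraper deployment exists, would be:

	# print the container image(s) of the scraper deployment the test describes
	kubectl --context old-k8s-version-405706 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'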
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 2 (247.001429ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-405706 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-405706 logs -n 25: (1.139144496s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-230388 sudo cat                              | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo                                  | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo find                             | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-230388 sudo crio                             | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-230388                                       | bridge-230388                | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-147179 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:32 UTC |
	|         | disable-driver-mounts-147179                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:32 UTC | 27 Jan 25 15:33 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-458006             | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-349782            | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:33 UTC | 27 Jan 25 15:35 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-912913  | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:35 UTC |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-458006                  | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC | 27 Jan 25 15:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-458006                                   | no-preload-458006            | jenkins | v1.35.0 | 27 Jan 25 15:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-349782                 | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-349782                                  | embed-certs-349782           | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-912913       | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC | 27 Jan 25 15:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-912913 | jenkins | v1.35.0 | 27 Jan 25 15:35 UTC |                     |
	|         | default-k8s-diff-port-912913                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-405706        | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-405706             | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC | 27 Jan 25 15:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-405706                              | old-k8s-version-405706       | jenkins | v1.35.0 | 27 Jan 25 15:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 15:37:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 15:37:58.460225 1076050 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:37:58.460642 1076050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:37:58.460654 1076050 out.go:358] Setting ErrFile to fd 2...
	I0127 15:37:58.460661 1076050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:37:58.461077 1076050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:37:58.462086 1076050 out.go:352] Setting JSON to false
	I0127 15:37:58.463486 1076050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22825,"bootTime":1737969453,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:37:58.463630 1076050 start.go:139] virtualization: kvm guest
	I0127 15:37:58.465774 1076050 out.go:177] * [old-k8s-version-405706] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:37:58.467019 1076050 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:37:58.467027 1076050 notify.go:220] Checking for updates...
	I0127 15:37:58.469366 1076050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:37:58.470862 1076050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:37:58.472239 1076050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:37:58.473602 1076050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:37:58.474992 1076050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:37:58.477098 1076050 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:37:58.477731 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.477799 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.494965 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0127 15:37:58.495385 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.495879 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.495901 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.496287 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.496581 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.498539 1076050 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 15:37:58.499766 1076050 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:37:58.500092 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.500132 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.516530 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0127 15:37:58.517083 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.517634 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.517666 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.518105 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.518356 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.558744 1076050 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 15:37:58.560294 1076050 start.go:297] selected driver: kvm2
	I0127 15:37:58.560309 1076050 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:37:58.560451 1076050 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:37:58.561175 1076050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:37:58.561284 1076050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 15:37:58.579056 1076050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 15:37:58.579656 1076050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 15:37:58.579710 1076050 cni.go:84] Creating CNI manager for ""
	I0127 15:37:58.579776 1076050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:37:58.579842 1076050 start.go:340] cluster config:
	{Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:37:58.580020 1076050 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 15:37:58.581716 1076050 out.go:177] * Starting "old-k8s-version-405706" primary control-plane node in "old-k8s-version-405706" cluster
	I0127 15:37:58.582897 1076050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:37:58.582967 1076050 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 15:37:58.582980 1076050 cache.go:56] Caching tarball of preloaded images
	I0127 15:37:58.583091 1076050 preload.go:172] Found /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 15:37:58.583107 1076050 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 15:37:58.583235 1076050 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:37:58.583561 1076050 start.go:360] acquireMachinesLock for old-k8s-version-405706: {Name:mk884e36253ca066a698970989f20649e5f9cbef Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 15:37:58.583628 1076050 start.go:364] duration metric: took 38.743µs to acquireMachinesLock for "old-k8s-version-405706"
	I0127 15:37:58.583652 1076050 start.go:96] Skipping create...Using existing machine configuration
	I0127 15:37:58.583664 1076050 fix.go:54] fixHost starting: 
	I0127 15:37:58.584041 1076050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:37:58.584088 1076050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:37:58.599995 1076050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0127 15:37:58.600476 1076050 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:37:58.600955 1076050 main.go:141] libmachine: Using API Version  1
	I0127 15:37:58.600978 1076050 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:37:58.601364 1076050 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:37:58.601600 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:37:58.601761 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetState
	I0127 15:37:58.603539 1076050 fix.go:112] recreateIfNeeded on old-k8s-version-405706: state=Stopped err=<nil>
	I0127 15:37:58.603586 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	W0127 15:37:58.603763 1076050 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 15:37:58.606243 1076050 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-405706" ...
	I0127 15:37:54.081369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:56.581569 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.582848 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:59.787393 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:01.789117 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.529695 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:01.029818 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:37:58.607570 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .Start
	I0127 15:37:58.607751 1076050 main.go:141] libmachine: (old-k8s-version-405706) starting domain...
	I0127 15:37:58.607775 1076050 main.go:141] libmachine: (old-k8s-version-405706) ensuring networks are active...
	I0127 15:37:58.608545 1076050 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network default is active
	I0127 15:37:58.608940 1076050 main.go:141] libmachine: (old-k8s-version-405706) Ensuring network mk-old-k8s-version-405706 is active
	I0127 15:37:58.609360 1076050 main.go:141] libmachine: (old-k8s-version-405706) getting domain XML...
	I0127 15:37:58.610094 1076050 main.go:141] libmachine: (old-k8s-version-405706) creating domain...
	I0127 15:37:59.916140 1076050 main.go:141] libmachine: (old-k8s-version-405706) waiting for IP...
	I0127 15:37:59.917074 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:37:59.917644 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:37:59.917771 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:37:59.917639 1076085 retry.go:31] will retry after 260.191068ms: waiting for domain to come up
	I0127 15:38:00.180221 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.180922 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.180948 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.180879 1076085 retry.go:31] will retry after 359.566395ms: waiting for domain to come up
	I0127 15:38:00.542429 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.543056 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.543097 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.542942 1076085 retry.go:31] will retry after 454.555688ms: waiting for domain to come up
	I0127 15:38:00.999387 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:00.999926 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:00.999963 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:00.999888 1076085 retry.go:31] will retry after 559.246215ms: waiting for domain to come up
	I0127 15:38:01.560836 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:01.561528 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:01.561554 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:01.561489 1076085 retry.go:31] will retry after 552.626147ms: waiting for domain to come up
	I0127 15:38:02.116418 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:02.116873 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:02.116914 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:02.116852 1076085 retry.go:31] will retry after 808.293412ms: waiting for domain to come up
	I0127 15:38:02.927177 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:02.927742 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:02.927794 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:02.927707 1076085 retry.go:31] will retry after 740.958034ms: waiting for domain to come up
	I0127 15:38:00.583568 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.081418 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:04.290371 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:06.787711 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.529199 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:05.530455 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:03.670221 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:03.670746 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:03.670778 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:03.670698 1076085 retry.go:31] will retry after 1.365040284s: waiting for domain to come up
	I0127 15:38:05.038371 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:05.039049 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:05.039084 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:05.039001 1076085 retry.go:31] will retry after 1.410803026s: waiting for domain to come up
	I0127 15:38:06.451661 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:06.452329 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:06.452353 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:06.452303 1076085 retry.go:31] will retry after 1.899894945s: waiting for domain to come up
	I0127 15:38:08.354209 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:08.354816 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:08.354843 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:08.354774 1076085 retry.go:31] will retry after 2.020609979s: waiting for domain to come up
	I0127 15:38:05.581452 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:07.587869 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:08.788730 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:11.289383 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:07.534482 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:10.029370 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:10.377713 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:10.378246 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:10.378288 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:10.378203 1076085 retry.go:31] will retry after 2.469378968s: waiting for domain to come up
	I0127 15:38:12.850116 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:12.850624 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | unable to find current IP address of domain old-k8s-version-405706 in network mk-old-k8s-version-405706
	I0127 15:38:12.850678 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | I0127 15:38:12.850598 1076085 retry.go:31] will retry after 4.322374162s: waiting for domain to come up
	I0127 15:38:10.085186 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:12.580963 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:13.788914 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:16.287163 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:12.528917 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:14.531412 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:17.028589 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:17.175528 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.176129 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has current primary IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.176161 1076050 main.go:141] libmachine: (old-k8s-version-405706) found domain IP: 192.168.72.49
	I0127 15:38:17.176174 1076050 main.go:141] libmachine: (old-k8s-version-405706) reserving static IP address...
	I0127 15:38:17.176643 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.176678 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | skip adding static IP to network mk-old-k8s-version-405706 - found existing host DHCP lease matching {name: "old-k8s-version-405706", mac: "52:54:00:c3:d6:50", ip: "192.168.72.49"}
	I0127 15:38:17.176696 1076050 main.go:141] libmachine: (old-k8s-version-405706) reserved static IP address 192.168.72.49 for domain old-k8s-version-405706
	I0127 15:38:17.176711 1076050 main.go:141] libmachine: (old-k8s-version-405706) waiting for SSH...
	I0127 15:38:17.176725 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Getting to WaitForSSH function...
	I0127 15:38:17.179302 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.179688 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.179730 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.179875 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH client type: external
	I0127 15:38:17.179902 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | Using SSH private key: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa (-rw-------)
	I0127 15:38:17.179949 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 15:38:17.179964 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | About to run SSH command:
	I0127 15:38:17.179977 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | exit 0
	I0127 15:38:17.309257 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | SSH cmd err, output: <nil>: 
	I0127 15:38:17.309663 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetConfigRaw
	I0127 15:38:17.310369 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:17.313129 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.313573 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.313604 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.313898 1076050 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/config.json ...
	I0127 15:38:17.314149 1076050 machine.go:93] provisionDockerMachine start ...
	I0127 15:38:17.314178 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:17.314424 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.317176 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.317563 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.317591 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.317822 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.318108 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.318299 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.318460 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.318635 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.318853 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.318864 1076050 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 15:38:17.433866 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 15:38:17.433903 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.434143 1076050 buildroot.go:166] provisioning hostname "old-k8s-version-405706"
	I0127 15:38:17.434203 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.434415 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.437023 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.437426 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.437473 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.437592 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.437754 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.437908 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.438061 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.438217 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.438406 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.438418 1076050 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-405706 && echo "old-k8s-version-405706" | sudo tee /etc/hostname
	I0127 15:38:17.569398 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-405706
	
	I0127 15:38:17.569429 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.572466 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.572839 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.572882 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.573066 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.573312 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.573557 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.573726 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.573924 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:17.574106 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:17.574123 1076050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-405706' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-405706/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-405706' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 15:38:17.705253 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 15:38:17.705300 1076050 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20321-1005652/.minikube CaCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20321-1005652/.minikube}
	I0127 15:38:17.705320 1076050 buildroot.go:174] setting up certificates
	I0127 15:38:17.705333 1076050 provision.go:84] configureAuth start
	I0127 15:38:17.705346 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetMachineName
	I0127 15:38:17.705683 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:17.708834 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.709332 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.709361 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.709583 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.712195 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.712714 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.712755 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.712924 1076050 provision.go:143] copyHostCerts
	I0127 15:38:17.712990 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem, removing ...
	I0127 15:38:17.713017 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem
	I0127 15:38:17.713095 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.pem (1078 bytes)
	I0127 15:38:17.713241 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem, removing ...
	I0127 15:38:17.713259 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem
	I0127 15:38:17.713326 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/cert.pem (1123 bytes)
	I0127 15:38:17.713446 1076050 exec_runner.go:144] found /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem, removing ...
	I0127 15:38:17.713460 1076050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem
	I0127 15:38:17.713500 1076050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20321-1005652/.minikube/key.pem (1679 bytes)
	I0127 15:38:17.713572 1076050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-405706 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-405706]
	I0127 15:38:17.976673 1076050 provision.go:177] copyRemoteCerts
	I0127 15:38:17.976750 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 15:38:17.976777 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:17.979513 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.979876 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:17.979909 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:17.980065 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:17.980267 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:17.980415 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:17.980554 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.068921 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 15:38:18.098428 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 15:38:18.126079 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 15:38:18.152193 1076050 provision.go:87] duration metric: took 446.842204ms to configureAuth
	I0127 15:38:18.152233 1076050 buildroot.go:189] setting minikube options for container-runtime
	I0127 15:38:18.152508 1076050 config.go:182] Loaded profile config "old-k8s-version-405706": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:38:18.152613 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.155796 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.156222 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.156254 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.156368 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.156577 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.156774 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.156938 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.157163 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:18.157375 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:18.157392 1076050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 15:38:18.414989 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 15:38:18.415023 1076050 machine.go:96] duration metric: took 1.100855468s to provisionDockerMachine
	I0127 15:38:18.415039 1076050 start.go:293] postStartSetup for "old-k8s-version-405706" (driver="kvm2")
	I0127 15:38:18.415054 1076050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 15:38:18.415078 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.415462 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 15:38:18.415499 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.418353 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.418778 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.418818 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.418925 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.419129 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.419322 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.419440 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:14.581198 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:16.581669 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:18.508389 1076050 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 15:38:18.513026 1076050 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 15:38:18.513065 1076050 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/addons for local assets ...
	I0127 15:38:18.513137 1076050 filesync.go:126] Scanning /home/jenkins/minikube-integration/20321-1005652/.minikube/files for local assets ...
	I0127 15:38:18.513210 1076050 filesync.go:149] local asset: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem -> 10128162.pem in /etc/ssl/certs
	I0127 15:38:18.513309 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 15:38:18.523553 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:38:18.550472 1076050 start.go:296] duration metric: took 135.415525ms for postStartSetup
	I0127 15:38:18.550553 1076050 fix.go:56] duration metric: took 19.966860382s for fixHost
	I0127 15:38:18.550584 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.553490 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.553896 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.553956 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.554089 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.554297 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.554458 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.554585 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.554806 1076050 main.go:141] libmachine: Using SSH client type: native
	I0127 15:38:18.555042 1076050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0127 15:38:18.555058 1076050 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 15:38:18.670326 1076050 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737992298.641469796
	
	I0127 15:38:18.670351 1076050 fix.go:216] guest clock: 1737992298.641469796
	I0127 15:38:18.670358 1076050 fix.go:229] Guest: 2025-01-27 15:38:18.641469796 +0000 UTC Remote: 2025-01-27 15:38:18.550560739 +0000 UTC m=+20.130793423 (delta=90.909057ms)
	I0127 15:38:18.670379 1076050 fix.go:200] guest clock delta is within tolerance: 90.909057ms
	I0127 15:38:18.670384 1076050 start.go:83] releasing machines lock for "old-k8s-version-405706", held for 20.08674208s
	I0127 15:38:18.670400 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.670689 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:18.673557 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.673931 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.673967 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.674112 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674583 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674751 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .DriverName
	I0127 15:38:18.674869 1076050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 15:38:18.674916 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.674944 1076050 ssh_runner.go:195] Run: cat /version.json
	I0127 15:38:18.674975 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHHostname
	I0127 15:38:18.677875 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678255 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678395 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.678427 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678595 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.678749 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:18.678783 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:18.678819 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.679001 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.679093 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHPort
	I0127 15:38:18.679181 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.679243 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHKeyPath
	I0127 15:38:18.681217 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetSSHUsername
	I0127 15:38:18.681729 1076050 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/old-k8s-version-405706/id_rsa Username:docker}
	I0127 15:38:18.787808 1076050 ssh_runner.go:195] Run: systemctl --version
	I0127 15:38:18.794834 1076050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 15:38:18.943494 1076050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 15:38:18.950152 1076050 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 15:38:18.950269 1076050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 15:38:18.967110 1076050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 15:38:18.967141 1076050 start.go:495] detecting cgroup driver to use...
	I0127 15:38:18.967215 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 15:38:18.985631 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 15:38:19.002007 1076050 docker.go:217] disabling cri-docker service (if available) ...
	I0127 15:38:19.002098 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 15:38:19.015975 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 15:38:19.030630 1076050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 15:38:19.167900 1076050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 15:38:19.339595 1076050 docker.go:233] disabling docker service ...
	I0127 15:38:19.339680 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 15:38:19.355894 1076050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 15:38:19.370010 1076050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 15:38:19.503289 1076050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 15:38:19.640006 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 15:38:19.656134 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 15:38:19.676136 1076050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 15:38:19.676207 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.688127 1076050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 15:38:19.688235 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.700866 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.712387 1076050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 15:38:19.724833 1076050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 15:38:19.736825 1076050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 15:38:19.747906 1076050 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 15:38:19.747976 1076050 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 15:38:19.761744 1076050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 15:38:19.771558 1076050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:38:19.891616 1076050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 15:38:19.987396 1076050 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 15:38:19.987496 1076050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 15:38:19.993148 1076050 start.go:563] Will wait 60s for crictl version
	I0127 15:38:19.993218 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:19.997232 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 15:38:20.047289 1076050 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 15:38:20.047381 1076050 ssh_runner.go:195] Run: crio --version
	I0127 15:38:20.080844 1076050 ssh_runner.go:195] Run: crio --version
	I0127 15:38:20.113498 1076050 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 15:38:18.287782 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:20.288830 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:19.029508 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:21.031738 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:20.115011 1076050 main.go:141] libmachine: (old-k8s-version-405706) Calling .GetIP
	I0127 15:38:20.118087 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:20.118526 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:d6:50", ip: ""} in network mk-old-k8s-version-405706: {Iface:virbr4 ExpiryTime:2025-01-27 16:31:46 +0000 UTC Type:0 Mac:52:54:00:c3:d6:50 Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-405706 Clientid:01:52:54:00:c3:d6:50}
	I0127 15:38:20.118554 1076050 main.go:141] libmachine: (old-k8s-version-405706) DBG | domain old-k8s-version-405706 has defined IP address 192.168.72.49 and MAC address 52:54:00:c3:d6:50 in network mk-old-k8s-version-405706
	I0127 15:38:20.118911 1076050 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 15:38:20.123918 1076050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:38:20.137420 1076050 kubeadm.go:883] updating cluster {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 15:38:20.137608 1076050 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 15:38:20.137679 1076050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:38:20.203088 1076050 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:38:20.203162 1076050 ssh_runner.go:195] Run: which lz4
	I0127 15:38:20.207834 1076050 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 15:38:20.212511 1076050 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 15:38:20.212550 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 15:38:21.944361 1076050 crio.go:462] duration metric: took 1.736570115s to copy over tarball
	I0127 15:38:21.944459 1076050 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 15:38:19.082119 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:21.583597 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:22.786853 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:24.787379 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:26.788848 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:23.529051 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:25.530450 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:25.017812 1076050 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.073312095s)
	I0127 15:38:25.017848 1076050 crio.go:469] duration metric: took 3.07344607s to extract the tarball
	I0127 15:38:25.017859 1076050 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 15:38:25.068609 1076050 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 15:38:25.107660 1076050 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 15:38:25.107705 1076050 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 15:38:25.107797 1076050 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.107831 1076050 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.107843 1076050 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 15:38:25.107782 1076050 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.107866 1076050 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.107793 1076050 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.107810 1076050 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.107872 1076050 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.109711 1076050 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.109716 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.109736 1076050 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.109749 1076050 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 15:38:25.109765 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.109711 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.109717 1076050 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.109721 1076050 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.319866 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 15:38:25.320854 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.329418 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.331454 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.331999 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.338125 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.346119 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.438398 1076050 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 15:38:25.438508 1076050 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 15:38:25.438596 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.485875 1076050 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 15:38:25.485939 1076050 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.486002 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.524177 1076050 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 15:38:25.524230 1076050 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.524284 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.533972 1076050 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:38:25.537150 1076050 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 15:38:25.537198 1076050 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.537239 1076050 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 15:38:25.537282 1076050 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.537306 1076050 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 15:38:25.537329 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537256 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537388 1076050 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 15:38:25.537334 1076050 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.537413 1076050 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.537430 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537437 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.537438 1076050 ssh_runner.go:195] Run: which crictl
	I0127 15:38:25.537484 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.537505 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.730245 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.730334 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.730438 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.730438 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.730510 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.730615 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:25.730667 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.896539 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:25.896835 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:25.896864 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 15:38:25.896869 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:25.896952 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:25.896990 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 15:38:25.897080 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 15:38:26.067159 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 15:38:26.067203 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 15:38:26.067293 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 15:38:26.078064 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 15:38:26.078128 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 15:38:26.078233 1076050 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 15:38:26.078345 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 15:38:26.172870 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 15:38:26.172975 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 15:38:26.177848 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 15:38:26.177943 1076050 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 15:38:26.177981 1076050 cache_images.go:92] duration metric: took 1.070258879s to LoadCachedImages
	W0127 15:38:26.178068 1076050 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0127 15:38:26.178082 1076050 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0127 15:38:26.178211 1076050 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-405706 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 15:38:26.178294 1076050 ssh_runner.go:195] Run: crio config
	I0127 15:38:26.228357 1076050 cni.go:84] Creating CNI manager for ""
	I0127 15:38:26.228379 1076050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:38:26.228388 1076050 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 15:38:26.228409 1076050 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-405706 NodeName:old-k8s-version-405706 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 15:38:26.228568 1076050 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-405706"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 15:38:26.228657 1076050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 15:38:26.240731 1076050 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 15:38:26.240809 1076050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 15:38:26.251662 1076050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0127 15:38:26.270153 1076050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 15:38:26.292045 1076050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0127 15:38:26.312171 1076050 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0127 15:38:26.316436 1076050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 15:38:26.330437 1076050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:38:26.453879 1076050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:38:26.473364 1076050 certs.go:68] Setting up /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706 for IP: 192.168.72.49
	I0127 15:38:26.473395 1076050 certs.go:194] generating shared ca certs ...
	I0127 15:38:26.473419 1076050 certs.go:226] acquiring lock for ca certs: {Name:mk0f815247e4bef2bde3ddcae95639d1cd9cab24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:38:26.473672 1076050 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key
	I0127 15:38:26.473739 1076050 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key
	I0127 15:38:26.473755 1076050 certs.go:256] generating profile certs ...
	I0127 15:38:26.473909 1076050 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.key
	I0127 15:38:26.473993 1076050 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key.8816e362
	I0127 15:38:26.474047 1076050 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key
	I0127 15:38:26.474215 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem (1338 bytes)
	W0127 15:38:26.474262 1076050 certs.go:480] ignoring /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816_empty.pem, impossibly tiny 0 bytes
	I0127 15:38:26.474272 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 15:38:26.474304 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/ca.pem (1078 bytes)
	I0127 15:38:26.474335 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/cert.pem (1123 bytes)
	I0127 15:38:26.474377 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/key.pem (1679 bytes)
	I0127 15:38:26.474434 1076050 certs.go:484] found cert: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem (1708 bytes)
	I0127 15:38:26.475310 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 15:38:26.528151 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 15:38:26.569116 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 15:38:26.612791 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 15:38:26.643362 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 15:38:26.682611 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 15:38:26.736411 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 15:38:26.766171 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 15:38:26.806820 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/ssl/certs/10128162.pem --> /usr/share/ca-certificates/10128162.pem (1708 bytes)
	I0127 15:38:26.835935 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 15:38:26.862752 1076050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20321-1005652/.minikube/certs/1012816.pem --> /usr/share/ca-certificates/1012816.pem (1338 bytes)
	I0127 15:38:26.890713 1076050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 15:38:26.910713 1076050 ssh_runner.go:195] Run: openssl version
	I0127 15:38:26.917762 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1012816.pem && ln -fs /usr/share/ca-certificates/1012816.pem /etc/ssl/certs/1012816.pem"
	I0127 15:38:26.930093 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.935103 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 14:20 /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.935187 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1012816.pem
	I0127 15:38:26.941655 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1012816.pem /etc/ssl/certs/51391683.0"
	I0127 15:38:26.955281 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10128162.pem && ln -fs /usr/share/ca-certificates/10128162.pem /etc/ssl/certs/10128162.pem"
	I0127 15:38:26.969095 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.974104 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 14:20 /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.974177 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10128162.pem
	I0127 15:38:26.980428 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10128162.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 15:38:26.992636 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 15:38:27.006632 1076050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.011797 1076050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 14:06 /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.011873 1076050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 15:38:27.018384 1076050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 15:38:27.032120 1076050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 15:38:27.037441 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 15:38:27.044020 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 15:38:27.050856 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 15:38:27.057896 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 15:38:27.065183 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 15:38:27.072632 1076050 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 15:38:27.079504 1076050 kubeadm.go:392] StartCluster: {Name:old-k8s-version-405706 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-405706 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 15:38:27.079605 1076050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 15:38:27.079670 1076050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:38:27.122961 1076050 cri.go:89] found id: ""
	I0127 15:38:27.123034 1076050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 15:38:27.134170 1076050 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 15:38:27.134194 1076050 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 15:38:27.134254 1076050 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 15:38:27.146526 1076050 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:38:27.147269 1076050 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-405706" does not appear in /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:38:27.147608 1076050 kubeconfig.go:62] /home/jenkins/minikube-integration/20321-1005652/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-405706" cluster setting kubeconfig missing "old-k8s-version-405706" context setting]
	I0127 15:38:27.148175 1076050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:38:27.218301 1076050 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 15:38:27.230797 1076050 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0127 15:38:27.230842 1076050 kubeadm.go:1160] stopping kube-system containers ...
	I0127 15:38:27.230858 1076050 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 15:38:27.230918 1076050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 15:38:27.273845 1076050 cri.go:89] found id: ""
	I0127 15:38:27.273935 1076050 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 15:38:27.295864 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:38:27.308596 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:38:27.308616 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:38:27.308663 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:38:27.319955 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:38:27.320015 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:38:27.331528 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:38:27.342177 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:38:27.342248 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:38:27.352666 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:38:27.364010 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:38:27.364077 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:38:27.375886 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:38:27.386069 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:38:27.386141 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:38:27.398977 1076050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:38:27.410085 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:27.579462 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.350228 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:24.081574 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:26.084881 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.581361 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:29.287085 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:31.288269 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.030083 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:30.030174 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:28.604472 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.715137 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 15:38:28.812566 1076050 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:38:28.812663 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:29.312952 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:29.812784 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:30.313395 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:30.813525 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.313773 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.813137 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:32.313501 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:32.813028 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:33.312894 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:31.080211 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.582580 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.788390 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:36.287173 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:32.529206 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:35.028518 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:37.031307 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:33.813345 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:34.313510 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:34.813678 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:35.313121 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:35.813541 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.312890 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.813411 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:37.313228 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:37.813599 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:38.313526 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:36.081107 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.582581 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.287892 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:40.787491 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:39.529329 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:42.028378 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:38.812744 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:39.313501 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:39.813568 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:40.313585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:40.813078 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.312734 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.812823 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:42.312829 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:42.813108 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:43.312983 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:41.080457 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:43.082314 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:42.787697 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:45.287260 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:47.287367 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:44.028619 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:46.029083 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:43.813614 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:44.313522 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:44.813162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.313000 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.813166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:46.313147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:46.812791 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:47.312810 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:47.812775 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:48.313432 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:45.581743 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:47.582153 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:49.287859 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:51.288012 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:48.029471 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:50.529718 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:48.813154 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:49.312838 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:49.813340 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.312925 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.813287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:51.312785 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:51.813687 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:52.313111 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:52.812802 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:53.313097 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:50.081002 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:52.581311 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.288532 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:55.788221 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.028591 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:55.529910 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:53.813587 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.313181 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.812993 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:55.313464 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:55.813050 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:56.312920 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:56.813705 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:57.313622 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:57.812842 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:58.313381 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:54.581795 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:57.080722 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.288309 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:00.786850 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.028613 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:00.529908 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:38:58.812816 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.312817 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.813035 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:00.313444 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:00.813287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:01.312763 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:01.813721 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:02.313131 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:02.813297 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:03.313697 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:38:59.581769 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:02.080943 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:02.787929 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:05.287833 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:07.287889 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:03.029275 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:05.029418 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:07.030052 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:03.813314 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.313147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.813585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:05.313388 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:05.813722 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:06.313190 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:06.812942 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:07.313516 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:07.813321 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:08.313684 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:04.081681 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:06.582635 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.289282 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.788208 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:09.528140 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.529355 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:08.813457 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.312972 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.812986 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:10.313838 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:10.813128 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:11.312866 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:11.812982 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:12.312768 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:12.813426 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:13.313370 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:09.080839 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:11.581560 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:14.287327 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.288546 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:13.529804 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.028749 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:13.812803 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.313174 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.813162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:15.312724 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:15.813166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:16.313662 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:16.813497 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:17.313422 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:17.813587 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:18.313749 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:14.080371 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:16.582575 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.584549 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.787976 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:20.788184 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.029709 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:20.529523 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:18.813301 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:19.313610 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:19.813293 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:20.313667 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:20.813161 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.313709 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.813699 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:22.313185 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:22.813328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:23.313612 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:21.080013 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.080298 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.287582 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.787381 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.029776 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:25.529747 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:23.812846 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:24.313129 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:24.813728 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.313735 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.813439 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:26.313406 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:26.813597 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:27.313484 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:27.813672 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:28.313161 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:25.081823 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.581035 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.787632 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.287493 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.289889 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:27.530494 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:30.028046 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.030227 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:28.813541 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:28.813633 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:28.855334 1076050 cri.go:89] found id: ""
	I0127 15:39:28.855368 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.855376 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:28.855383 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:28.855466 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:28.892923 1076050 cri.go:89] found id: ""
	I0127 15:39:28.892959 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.892972 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:28.892980 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:28.893081 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:28.942133 1076050 cri.go:89] found id: ""
	I0127 15:39:28.942163 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.942187 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:28.942196 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:28.942261 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:28.980950 1076050 cri.go:89] found id: ""
	I0127 15:39:28.980978 1076050 logs.go:282] 0 containers: []
	W0127 15:39:28.980988 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:28.980995 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:28.981080 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:29.022166 1076050 cri.go:89] found id: ""
	I0127 15:39:29.022200 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.022209 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:29.022215 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:29.022269 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:29.060408 1076050 cri.go:89] found id: ""
	I0127 15:39:29.060439 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.060447 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:29.060454 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:29.060521 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:29.100890 1076050 cri.go:89] found id: ""
	I0127 15:39:29.100924 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.100935 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:29.100944 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:29.101075 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:29.139688 1076050 cri.go:89] found id: ""
	I0127 15:39:29.139720 1076050 logs.go:282] 0 containers: []
	W0127 15:39:29.139729 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:29.139741 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:29.139752 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:29.181255 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:29.181288 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:29.232218 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:29.232260 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:29.245853 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:29.245881 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:29.382461 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:29.382487 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:29.382501 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:31.957162 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:31.971225 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:31.971290 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:32.026501 1076050 cri.go:89] found id: ""
	I0127 15:39:32.026535 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.026546 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:32.026555 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:32.026624 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:32.066192 1076050 cri.go:89] found id: ""
	I0127 15:39:32.066232 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.066244 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:32.066253 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:32.066334 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:32.106017 1076050 cri.go:89] found id: ""
	I0127 15:39:32.106047 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.106056 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:32.106062 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:32.106130 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:32.146534 1076050 cri.go:89] found id: ""
	I0127 15:39:32.146565 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.146575 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:32.146581 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:32.146644 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:32.186982 1076050 cri.go:89] found id: ""
	I0127 15:39:32.187007 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.187016 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:32.187022 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:32.187077 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:32.229657 1076050 cri.go:89] found id: ""
	I0127 15:39:32.229685 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.229693 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:32.229700 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:32.229756 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:32.267228 1076050 cri.go:89] found id: ""
	I0127 15:39:32.267259 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.267268 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:32.267275 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:32.267340 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:32.305366 1076050 cri.go:89] found id: ""
	I0127 15:39:32.305394 1076050 logs.go:282] 0 containers: []
	W0127 15:39:32.305402 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:32.305412 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:32.305424 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:32.345293 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:32.345335 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:32.395863 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:32.395922 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:32.411092 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:32.411133 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:32.493214 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:32.493248 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:32.493266 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
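
Every "describe nodes" attempt in this section fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings: with no kube-apiserver container, nothing serves on the node's port 8443. A quick manual confirmation is to probe the port from inside the guest; this is a hedged sketch only, since neither ss nor curl appears in the log and their availability in the guest image is an assumption:

    # Assumption: ss and curl exist in the minikube guest image; <profile> is a placeholder.
    minikube ssh -p <profile> -- sudo ss -tlnp | grep 8443          # expect no listener while the apiserver is down
    minikube ssh -p <profile> -- curl -sk https://localhost:8443/healthz   # expect "connection refused"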
	I0127 15:39:30.082518 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:32.580263 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.787461 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.287358 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:34.530278 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.028574 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:35.077133 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:35.094000 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:35.094095 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:35.132448 1076050 cri.go:89] found id: ""
	I0127 15:39:35.132488 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.132500 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:35.132508 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:35.132583 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:35.167599 1076050 cri.go:89] found id: ""
	I0127 15:39:35.167632 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.167644 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:35.167653 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:35.167713 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:35.204383 1076050 cri.go:89] found id: ""
	I0127 15:39:35.204429 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.204438 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:35.204444 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:35.204503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:35.241382 1076050 cri.go:89] found id: ""
	I0127 15:39:35.241411 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.241423 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:35.241431 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:35.241500 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:35.278253 1076050 cri.go:89] found id: ""
	I0127 15:39:35.278280 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.278289 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:35.278296 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:35.278357 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:35.320389 1076050 cri.go:89] found id: ""
	I0127 15:39:35.320418 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.320425 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:35.320432 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:35.320498 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:35.360563 1076050 cri.go:89] found id: ""
	I0127 15:39:35.360592 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.360604 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:35.360613 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:35.360670 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:35.396537 1076050 cri.go:89] found id: ""
	I0127 15:39:35.396580 1076050 logs.go:282] 0 containers: []
	W0127 15:39:35.396593 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:35.396609 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:35.396628 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:35.474518 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:35.474554 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:35.474575 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:35.554396 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:35.554445 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:35.599042 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:35.599100 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:35.652578 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:35.652619 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:38.167582 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:38.182164 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:38.182250 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:38.218993 1076050 cri.go:89] found id: ""
	I0127 15:39:38.219025 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.219034 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:38.219040 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:38.219121 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:38.257547 1076050 cri.go:89] found id: ""
	I0127 15:39:38.257575 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.257584 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:38.257590 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:38.257643 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:38.295251 1076050 cri.go:89] found id: ""
	I0127 15:39:38.295287 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.295299 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:38.295307 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:38.295378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:38.339567 1076050 cri.go:89] found id: ""
	I0127 15:39:38.339605 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.339621 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:38.339629 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:38.339697 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:38.375969 1076050 cri.go:89] found id: ""
	I0127 15:39:38.376007 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.376019 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:38.376028 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:38.376097 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:38.429385 1076050 cri.go:89] found id: ""
	I0127 15:39:38.429416 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.429427 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:38.429435 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:38.429503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:34.587256 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:37.080093 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.287413 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.287958 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:39.028638 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.029306 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:38.481564 1076050 cri.go:89] found id: ""
	I0127 15:39:38.481604 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.481618 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:38.481627 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:38.481700 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:38.535177 1076050 cri.go:89] found id: ""
	I0127 15:39:38.535203 1076050 logs.go:282] 0 containers: []
	W0127 15:39:38.535211 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:38.535223 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:38.535238 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:38.549306 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:38.549349 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:38.622573 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:38.622607 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:38.622625 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:38.697323 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:38.697363 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:38.738950 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:38.738981 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:41.298384 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:41.312088 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:41.312162 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:41.349779 1076050 cri.go:89] found id: ""
	I0127 15:39:41.349808 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.349817 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:41.349824 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:41.349887 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:41.387675 1076050 cri.go:89] found id: ""
	I0127 15:39:41.387715 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.387732 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:41.387740 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:41.387797 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:41.424135 1076050 cri.go:89] found id: ""
	I0127 15:39:41.424166 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.424175 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:41.424181 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:41.424246 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:41.464733 1076050 cri.go:89] found id: ""
	I0127 15:39:41.464760 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.464768 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:41.464774 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:41.464835 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:41.506669 1076050 cri.go:89] found id: ""
	I0127 15:39:41.506700 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.506713 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:41.506725 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:41.506793 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:41.548804 1076050 cri.go:89] found id: ""
	I0127 15:39:41.548833 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.548842 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:41.548848 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:41.548911 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:41.590203 1076050 cri.go:89] found id: ""
	I0127 15:39:41.590233 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.590245 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:41.590253 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:41.590318 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:41.625407 1076050 cri.go:89] found id: ""
	I0127 15:39:41.625434 1076050 logs.go:282] 0 containers: []
	W0127 15:39:41.625442 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:41.625452 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:41.625466 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:41.702765 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:41.702808 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:41.745622 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:41.745662 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:41.799894 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:41.799943 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:41.814151 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:41.814180 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:41.899042 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:39.580910 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:41.581608 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.587620 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.787400 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:45.787456 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:43.529161 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:46.028736 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
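
Interleaved with the 1076050 wait loop, three other runs (PIDs 1074908, 1074659, and 1075160) are polling their metrics-server pods, which never report Ready within the 4m budget. The condition that pod_ready polls can be inspected directly with kubectl; a sketch, where the pod name is taken from the log lines above and the kube context name is a placeholder for the corresponding profile:

    # Sketch: inspect the Ready condition the pod_ready poller is waiting on.
    # Pod name copied from the log above; <profile> is a placeholder context name.
    kubectl --context <profile> -n kube-system get pod metrics-server-f79f97bbb-nj5f8 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
    kubectl --context <profile> -n kube-system describe pod metrics-server-f79f97bbb-nj5f8 | tail -n 20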
	I0127 15:39:44.399328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:44.420663 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:44.420731 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:44.484562 1076050 cri.go:89] found id: ""
	I0127 15:39:44.484595 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.484606 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:44.484616 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:44.484681 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:44.555635 1076050 cri.go:89] found id: ""
	I0127 15:39:44.555663 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.555672 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:44.555678 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:44.555730 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:44.598564 1076050 cri.go:89] found id: ""
	I0127 15:39:44.598592 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.598600 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:44.598606 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:44.598663 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:44.639072 1076050 cri.go:89] found id: ""
	I0127 15:39:44.639115 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.639126 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:44.639134 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:44.639200 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:44.677620 1076050 cri.go:89] found id: ""
	I0127 15:39:44.677652 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.677662 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:44.677670 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:44.677730 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:44.714227 1076050 cri.go:89] found id: ""
	I0127 15:39:44.714263 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.714273 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:44.714281 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:44.714357 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:44.753864 1076050 cri.go:89] found id: ""
	I0127 15:39:44.753898 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.753911 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:44.753919 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:44.753987 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:44.790576 1076050 cri.go:89] found id: ""
	I0127 15:39:44.790603 1076050 logs.go:282] 0 containers: []
	W0127 15:39:44.790613 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:44.790625 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:44.790641 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:44.864427 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:44.864468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:44.904955 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:44.904989 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:44.959074 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:44.959137 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:44.976053 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:44.976082 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:45.062578 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:47.562901 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:47.576665 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:47.576751 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:47.615806 1076050 cri.go:89] found id: ""
	I0127 15:39:47.615842 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.615855 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:47.615864 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:47.615936 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:47.651913 1076050 cri.go:89] found id: ""
	I0127 15:39:47.651947 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.651966 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:47.651974 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:47.652045 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:47.688572 1076050 cri.go:89] found id: ""
	I0127 15:39:47.688604 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.688614 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:47.688620 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:47.688680 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:47.726688 1076050 cri.go:89] found id: ""
	I0127 15:39:47.726725 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.726737 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:47.726745 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:47.726815 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:47.768385 1076050 cri.go:89] found id: ""
	I0127 15:39:47.768413 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.768424 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:47.768433 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:47.768493 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:47.806575 1076050 cri.go:89] found id: ""
	I0127 15:39:47.806601 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.806609 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:47.806615 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:47.806668 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:47.843234 1076050 cri.go:89] found id: ""
	I0127 15:39:47.843259 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.843267 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:47.843273 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:47.843325 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:47.882360 1076050 cri.go:89] found id: ""
	I0127 15:39:47.882398 1076050 logs.go:282] 0 containers: []
	W0127 15:39:47.882411 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:47.882426 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:47.882445 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:47.936678 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:47.936721 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:47.951861 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:47.951889 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:48.027451 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:48.027479 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:48.027497 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:48.110314 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:48.110362 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:46.079379 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:48.081369 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:47.788330 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.288398 1074659 pod_ready.go:103] pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:52.281192 1074659 pod_ready.go:82] duration metric: took 4m0.000550048s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" ...
	E0127 15:39:52.281240 1074659 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-cnfrq" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:39:52.281264 1074659 pod_ready.go:39] duration metric: took 4m13.057238138s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:39:52.281309 1074659 kubeadm.go:597] duration metric: took 4m21.316884653s to restartPrimaryControlPlane
	W0127 15:39:52.281435 1074659 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:39:52.281477 1074659 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
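
At 15:39:52 the 1074659 run gives up on restarting the existing control plane: the 4m0s wait for metrics-server-f79f97bbb-cnfrq expires, restartPrimaryControlPlane is abandoned after 4m21s, and minikube falls back to wiping the cluster with kubeadm reset. The exact command is in the line above; if a run like this has to be cleaned up by hand inside the node, the same invocation applies (destructive, shown here only as a copy of what the log records):

    # The reset command minikube runs (copied from the log line above); wipes cluster state.
    sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force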
	I0127 15:39:48.029038 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.529674 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:50.653993 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:50.668077 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:50.668150 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:50.708132 1076050 cri.go:89] found id: ""
	I0127 15:39:50.708160 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.708168 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:50.708175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:50.708244 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:50.748371 1076050 cri.go:89] found id: ""
	I0127 15:39:50.748400 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.748409 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:50.748415 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:50.748471 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:50.785148 1076050 cri.go:89] found id: ""
	I0127 15:39:50.785183 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.785194 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:50.785202 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:50.785267 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:50.820827 1076050 cri.go:89] found id: ""
	I0127 15:39:50.820864 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.820874 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:50.820881 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:50.820948 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:50.859566 1076050 cri.go:89] found id: ""
	I0127 15:39:50.859602 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.859615 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:50.859623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:50.859699 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:50.896227 1076050 cri.go:89] found id: ""
	I0127 15:39:50.896263 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.896276 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:50.896285 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:50.896352 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:50.933357 1076050 cri.go:89] found id: ""
	I0127 15:39:50.933393 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.933405 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:50.933414 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:50.933478 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:50.968264 1076050 cri.go:89] found id: ""
	I0127 15:39:50.968303 1076050 logs.go:282] 0 containers: []
	W0127 15:39:50.968313 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:50.968324 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:50.968338 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:51.026708 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:51.026754 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:51.041436 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:51.041475 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:51.110945 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:51.110967 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:51.110980 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:51.192815 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:51.192858 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:50.581346 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:53.080934 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:52.529918 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:55.028235 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:57.029052 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:53.737031 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:53.751175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:53.751266 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:53.793720 1076050 cri.go:89] found id: ""
	I0127 15:39:53.793748 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.793757 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:53.793764 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:53.793822 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:53.832993 1076050 cri.go:89] found id: ""
	I0127 15:39:53.833065 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.833074 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:53.833080 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:53.833139 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:53.872089 1076050 cri.go:89] found id: ""
	I0127 15:39:53.872122 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.872133 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:53.872147 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:53.872205 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:53.914262 1076050 cri.go:89] found id: ""
	I0127 15:39:53.914298 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.914311 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:53.914321 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:53.914400 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:53.954035 1076050 cri.go:89] found id: ""
	I0127 15:39:53.954073 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.954085 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:53.954093 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:53.954158 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:53.994248 1076050 cri.go:89] found id: ""
	I0127 15:39:53.994306 1076050 logs.go:282] 0 containers: []
	W0127 15:39:53.994320 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:53.994329 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:53.994407 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:54.031811 1076050 cri.go:89] found id: ""
	I0127 15:39:54.031836 1076050 logs.go:282] 0 containers: []
	W0127 15:39:54.031847 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:54.031855 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:54.031917 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:54.070159 1076050 cri.go:89] found id: ""
	I0127 15:39:54.070199 1076050 logs.go:282] 0 containers: []
	W0127 15:39:54.070212 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:54.070225 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:54.070242 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:54.112540 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:54.112575 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:54.163657 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:54.163710 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:54.178720 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:54.178757 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:54.255558 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:54.255596 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:54.255613 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
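
Each failed iteration ends by collecting the same five diagnostic sources: kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg, "describe nodes" via the bundled kubectl, and a container listing. When debugging a run like this one, the same bundle can be pulled in one pass from inside the node; the sketch below uses only commands that appear in the log, with output redirection added for convenience:

    # Sketch: collect the diagnostics minikube gathers each cycle, run from inside the node.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u crio -n 400 > crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig > nodes.txt 2>&1
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a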
	I0127 15:39:56.834676 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:56.848186 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:56.848265 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:56.885958 1076050 cri.go:89] found id: ""
	I0127 15:39:56.885984 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.885993 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:56.885999 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:56.886050 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:39:56.925195 1076050 cri.go:89] found id: ""
	I0127 15:39:56.925233 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.925247 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:39:56.925256 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:39:56.925328 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:39:56.967597 1076050 cri.go:89] found id: ""
	I0127 15:39:56.967631 1076050 logs.go:282] 0 containers: []
	W0127 15:39:56.967644 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:39:56.967654 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:39:56.967719 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:39:57.005973 1076050 cri.go:89] found id: ""
	I0127 15:39:57.006008 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.006021 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:39:57.006029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:39:57.006104 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:39:57.042547 1076050 cri.go:89] found id: ""
	I0127 15:39:57.042581 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.042593 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:39:57.042601 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:39:57.042664 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:39:57.084492 1076050 cri.go:89] found id: ""
	I0127 15:39:57.084517 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.084525 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:39:57.084531 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:39:57.084581 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:39:57.120954 1076050 cri.go:89] found id: ""
	I0127 15:39:57.120988 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.121032 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:39:57.121039 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:39:57.121100 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:39:57.159620 1076050 cri.go:89] found id: ""
	I0127 15:39:57.159657 1076050 logs.go:282] 0 containers: []
	W0127 15:39:57.159668 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:39:57.159681 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:39:57.159696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:39:57.203209 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:39:57.203245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:39:57.253929 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:39:57.253972 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:39:57.268430 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:39:57.268463 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:39:57.338716 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:39:57.338741 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:39:57.338760 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:39:55.082397 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:57.581203 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:59.528435 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:01.530232 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:39:59.918299 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:39:59.933577 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:39:59.933650 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:39:59.971396 1076050 cri.go:89] found id: ""
	I0127 15:39:59.971437 1076050 logs.go:282] 0 containers: []
	W0127 15:39:59.971449 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:39:59.971457 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:39:59.971516 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:00.012852 1076050 cri.go:89] found id: ""
	I0127 15:40:00.012890 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.012902 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:00.012910 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:00.012983 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:00.053636 1076050 cri.go:89] found id: ""
	I0127 15:40:00.053673 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.053685 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:00.053693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:00.053757 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:00.091584 1076050 cri.go:89] found id: ""
	I0127 15:40:00.091615 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.091626 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:00.091634 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:00.091698 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:00.126906 1076050 cri.go:89] found id: ""
	I0127 15:40:00.126936 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.126945 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:00.126957 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:00.127012 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:00.164308 1076050 cri.go:89] found id: ""
	I0127 15:40:00.164345 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.164354 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:00.164360 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:00.164412 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:00.200695 1076050 cri.go:89] found id: ""
	I0127 15:40:00.200727 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.200739 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:00.200750 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:00.200807 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:00.239910 1076050 cri.go:89] found id: ""
	I0127 15:40:00.239938 1076050 logs.go:282] 0 containers: []
	W0127 15:40:00.239947 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:00.239958 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:00.239970 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:00.255441 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:00.255468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:00.333737 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:00.333767 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:00.333782 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:00.417199 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:00.417256 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:00.461683 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:00.461711 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:03.016318 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:03.033626 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:03.033707 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:03.070895 1076050 cri.go:89] found id: ""
	I0127 15:40:03.070929 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.070940 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:03.070948 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:03.071011 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:03.107691 1076050 cri.go:89] found id: ""
	I0127 15:40:03.107725 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.107736 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:03.107742 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:03.107806 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:03.144989 1076050 cri.go:89] found id: ""
	I0127 15:40:03.145032 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.145044 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:03.145052 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:03.145106 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:03.182441 1076050 cri.go:89] found id: ""
	I0127 15:40:03.182473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.182482 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:03.182488 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:03.182540 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:03.220251 1076050 cri.go:89] found id: ""
	I0127 15:40:03.220286 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.220298 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:03.220306 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:03.220366 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:03.258761 1076050 cri.go:89] found id: ""
	I0127 15:40:03.258799 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.258810 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:03.258818 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:03.258888 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:03.307236 1076050 cri.go:89] found id: ""
	I0127 15:40:03.307274 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.307283 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:03.307289 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:03.307352 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:03.354451 1076050 cri.go:89] found id: ""
	I0127 15:40:03.354487 1076050 logs.go:282] 0 containers: []
	W0127 15:40:03.354498 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:03.354509 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:03.354524 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:03.405369 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:03.405412 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:03.420837 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:03.420866 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 15:40:00.081973 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:02.581659 1074908 pod_ready.go:103] pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:04.030283 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:06.529988 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	W0127 15:40:03.496384 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:03.496420 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:03.496435 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:03.576992 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:03.577066 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:06.128185 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:06.142266 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:06.142381 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:06.181053 1076050 cri.go:89] found id: ""
	I0127 15:40:06.181087 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.181097 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:06.181106 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:06.181162 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:06.218206 1076050 cri.go:89] found id: ""
	I0127 15:40:06.218236 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.218245 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:06.218251 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:06.218304 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:06.255094 1076050 cri.go:89] found id: ""
	I0127 15:40:06.255138 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.255158 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:06.255165 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:06.255221 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:06.295564 1076050 cri.go:89] found id: ""
	I0127 15:40:06.295598 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.295611 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:06.295620 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:06.295683 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:06.332518 1076050 cri.go:89] found id: ""
	I0127 15:40:06.332552 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.332561 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:06.332568 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:06.332641 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:06.371503 1076050 cri.go:89] found id: ""
	I0127 15:40:06.371532 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.371540 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:06.371547 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:06.371599 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:06.409091 1076050 cri.go:89] found id: ""
	I0127 15:40:06.409119 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.409128 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:06.409135 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:06.409192 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:06.445033 1076050 cri.go:89] found id: ""
	I0127 15:40:06.445078 1076050 logs.go:282] 0 containers: []
	W0127 15:40:06.445092 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:06.445113 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:06.445132 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:06.460284 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:06.460321 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:06.543807 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:06.543831 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:06.543844 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:06.626884 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:06.626929 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:06.670309 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:06.670350 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:05.075392 1074908 pod_ready.go:82] duration metric: took 4m0.001148212s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" ...
	E0127 15:40:05.075435 1074908 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-vskgz" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:40:05.075460 1074908 pod_ready.go:39] duration metric: took 4m14.604653981s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:05.075504 1074908 kubeadm.go:597] duration metric: took 4m23.17285487s to restartPrimaryControlPlane
	W0127 15:40:05.075610 1074908 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:40:05.075649 1074908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:09.029666 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:11.529388 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:09.219752 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:09.234460 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:09.234537 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:09.271526 1076050 cri.go:89] found id: ""
	I0127 15:40:09.271574 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.271584 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:09.271590 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:09.271661 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:09.312643 1076050 cri.go:89] found id: ""
	I0127 15:40:09.312681 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.312696 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:09.312705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:09.312771 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:09.351697 1076050 cri.go:89] found id: ""
	I0127 15:40:09.351736 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.351749 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:09.351757 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:09.351825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:09.390289 1076050 cri.go:89] found id: ""
	I0127 15:40:09.390315 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.390324 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:09.390332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:09.390400 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:09.431515 1076050 cri.go:89] found id: ""
	I0127 15:40:09.431548 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.431559 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:09.431567 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:09.431634 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:09.473134 1076050 cri.go:89] found id: ""
	I0127 15:40:09.473170 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.473182 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:09.473190 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:09.473261 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:09.516505 1076050 cri.go:89] found id: ""
	I0127 15:40:09.516542 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.516556 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:09.516564 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:09.516634 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:09.560596 1076050 cri.go:89] found id: ""
	I0127 15:40:09.560638 1076050 logs.go:282] 0 containers: []
	W0127 15:40:09.560649 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:09.560662 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:09.560678 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:09.616174 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:09.616219 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:09.631586 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:09.631622 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:09.706642 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:09.706677 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:09.706696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:09.780834 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:09.780883 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:12.323632 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:12.337043 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:12.337121 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:12.371851 1076050 cri.go:89] found id: ""
	I0127 15:40:12.371875 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.371884 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:12.371891 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:12.371963 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:12.409962 1076050 cri.go:89] found id: ""
	I0127 15:40:12.409997 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.410010 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:12.410018 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:12.410095 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:12.445440 1076050 cri.go:89] found id: ""
	I0127 15:40:12.445473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.445482 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:12.445489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:12.445544 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:12.481239 1076050 cri.go:89] found id: ""
	I0127 15:40:12.481270 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.481282 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:12.481303 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:12.481372 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:12.520832 1076050 cri.go:89] found id: ""
	I0127 15:40:12.520859 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.520867 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:12.520873 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:12.520923 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:12.559781 1076050 cri.go:89] found id: ""
	I0127 15:40:12.559818 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.559829 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:12.559838 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:12.559901 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:12.597821 1076050 cri.go:89] found id: ""
	I0127 15:40:12.597861 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.597873 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:12.597882 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:12.597944 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:12.635939 1076050 cri.go:89] found id: ""
	I0127 15:40:12.635974 1076050 logs.go:282] 0 containers: []
	W0127 15:40:12.635986 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:12.635998 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:12.636013 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:12.709126 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:12.709150 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:12.709163 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:12.792573 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:12.792617 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:12.832327 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:12.832368 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:12.884984 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:12.885039 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:14.028951 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:16.029783 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:15.401225 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:15.415906 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:15.415993 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:15.457989 1076050 cri.go:89] found id: ""
	I0127 15:40:15.458021 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.458031 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:15.458038 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:15.458100 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:15.493789 1076050 cri.go:89] found id: ""
	I0127 15:40:15.493836 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.493852 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:15.493860 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:15.493927 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:15.535193 1076050 cri.go:89] found id: ""
	I0127 15:40:15.535219 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.535227 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:15.535233 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:15.535298 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:15.574983 1076050 cri.go:89] found id: ""
	I0127 15:40:15.575016 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.575030 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:15.575036 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:15.575107 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:15.613038 1076050 cri.go:89] found id: ""
	I0127 15:40:15.613072 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.613083 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:15.613091 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:15.613166 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:15.651439 1076050 cri.go:89] found id: ""
	I0127 15:40:15.651473 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.651483 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:15.651489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:15.651559 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:15.697895 1076050 cri.go:89] found id: ""
	I0127 15:40:15.697933 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.697945 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:15.697953 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:15.698026 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:15.736368 1076050 cri.go:89] found id: ""
	I0127 15:40:15.736397 1076050 logs.go:282] 0 containers: []
	W0127 15:40:15.736405 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:15.736416 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:15.736431 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:15.788954 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:15.789002 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:15.803162 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:15.803193 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:15.878504 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:15.878538 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:15.878557 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:15.955134 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:15.955186 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:20.131059 1074659 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.849552205s)
	I0127 15:40:20.131159 1074659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:20.154965 1074659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:20.170718 1074659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:20.182783 1074659 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:20.182813 1074659 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:20.182879 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:40:20.196772 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:20.196838 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:20.219107 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:40:20.231548 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:20.231633 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:20.243226 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:40:20.262500 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:20.262565 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:20.273568 1074659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:40:20.283606 1074659 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:20.283675 1074659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
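The sequence above is minikube's stale-kubeconfig cleanup before re-running kubeadm: for each config file under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here all four files were already missing after the reset, so the grep fails and the rm is a no-op). A compressed sketch of that check, using the endpoint string from the log lines above, is shown below.

	# hedged sketch of the stale-config check performed above
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done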
	I0127 15:40:20.294389 1074659 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:20.475280 1074659 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:40:18.529412 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:21.029561 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:18.497724 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:18.519382 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:18.519463 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:18.556458 1076050 cri.go:89] found id: ""
	I0127 15:40:18.556495 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.556504 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:18.556511 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:18.556566 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:18.593672 1076050 cri.go:89] found id: ""
	I0127 15:40:18.593700 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.593717 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:18.593726 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:18.593794 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:18.632353 1076050 cri.go:89] found id: ""
	I0127 15:40:18.632393 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.632404 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:18.632412 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:18.632467 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:18.668613 1076050 cri.go:89] found id: ""
	I0127 15:40:18.668647 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.668659 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:18.668668 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:18.668738 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:18.706751 1076050 cri.go:89] found id: ""
	I0127 15:40:18.706786 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.706798 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:18.706806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:18.706872 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:18.745670 1076050 cri.go:89] found id: ""
	I0127 15:40:18.745706 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.745719 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:18.745728 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:18.745798 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:18.783666 1076050 cri.go:89] found id: ""
	I0127 15:40:18.783696 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.783708 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:18.783716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:18.783783 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:18.821591 1076050 cri.go:89] found id: ""
	I0127 15:40:18.821626 1076050 logs.go:282] 0 containers: []
	W0127 15:40:18.821637 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:18.821652 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:18.821669 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:18.895554 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:18.895582 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:18.895600 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:18.977366 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:18.977416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:19.020341 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:19.020374 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:19.073493 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:19.073537 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:21.589182 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:21.607125 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:21.607245 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:21.654887 1076050 cri.go:89] found id: ""
	I0127 15:40:21.654922 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.654933 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:21.654942 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:21.655013 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:21.703233 1076050 cri.go:89] found id: ""
	I0127 15:40:21.703279 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.703289 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:21.703298 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:21.703440 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:21.744227 1076050 cri.go:89] found id: ""
	I0127 15:40:21.744260 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.744273 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:21.744286 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:21.744356 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:21.786397 1076050 cri.go:89] found id: ""
	I0127 15:40:21.786430 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.786445 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:21.786454 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:21.786517 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:21.831934 1076050 cri.go:89] found id: ""
	I0127 15:40:21.831963 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.831974 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:21.831980 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:21.832036 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:21.877230 1076050 cri.go:89] found id: ""
	I0127 15:40:21.877264 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.877275 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:21.877283 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:21.877351 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:21.923993 1076050 cri.go:89] found id: ""
	I0127 15:40:21.924026 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.924038 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:21.924047 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:21.924109 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:21.963890 1076050 cri.go:89] found id: ""
	I0127 15:40:21.963922 1076050 logs.go:282] 0 containers: []
	W0127 15:40:21.963931 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:21.963942 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:21.963958 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:22.010706 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:22.010743 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:22.070053 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:22.070096 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:22.085574 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:22.085604 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:22.163198 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:22.163228 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:22.163245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:23.031094 1075160 pod_ready.go:103] pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:24.523077 1075160 pod_ready.go:82] duration metric: took 4m0.001138229s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" ...
	E0127 15:40:24.523130 1075160 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-nj5f8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0127 15:40:24.523156 1075160 pod_ready.go:39] duration metric: took 4m14.040193884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:24.523186 1075160 kubeadm.go:597] duration metric: took 4m21.511137654s to restartPrimaryControlPlane
	W0127 15:40:24.523251 1075160 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:40:24.523280 1075160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:40:24.747046 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:24.761103 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:24.761194 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:24.806570 1076050 cri.go:89] found id: ""
	I0127 15:40:24.806659 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.806679 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:24.806689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:24.806755 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:24.854651 1076050 cri.go:89] found id: ""
	I0127 15:40:24.854684 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.854697 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:24.854705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:24.854773 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:24.915668 1076050 cri.go:89] found id: ""
	I0127 15:40:24.915705 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.915718 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:24.915728 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:24.915794 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:24.975570 1076050 cri.go:89] found id: ""
	I0127 15:40:24.975610 1076050 logs.go:282] 0 containers: []
	W0127 15:40:24.975623 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:24.975632 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:24.975704 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:25.025853 1076050 cri.go:89] found id: ""
	I0127 15:40:25.025885 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.025896 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:25.025903 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:25.025980 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:25.064940 1076050 cri.go:89] found id: ""
	I0127 15:40:25.064976 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.064987 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:25.064996 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:25.065082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:25.110507 1076050 cri.go:89] found id: ""
	I0127 15:40:25.110539 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.110549 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:25.110558 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:25.110622 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:25.150241 1076050 cri.go:89] found id: ""
	I0127 15:40:25.150288 1076050 logs.go:282] 0 containers: []
	W0127 15:40:25.150299 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:25.150313 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:25.150330 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:25.243205 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:25.243238 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:25.243255 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:25.323856 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:25.323900 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:25.367207 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:25.367245 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:25.429072 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:25.429120 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:27.945904 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:27.959618 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:27.959708 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:27.999655 1076050 cri.go:89] found id: ""
	I0127 15:40:27.999685 1076050 logs.go:282] 0 containers: []
	W0127 15:40:27.999697 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:27.999705 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:27.999768 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:28.039662 1076050 cri.go:89] found id: ""
	I0127 15:40:28.039695 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.039708 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:28.039716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:28.039786 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:28.075418 1076050 cri.go:89] found id: ""
	I0127 15:40:28.075451 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.075462 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:28.075472 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:28.075542 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:28.114964 1076050 cri.go:89] found id: ""
	I0127 15:40:28.115023 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.115036 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:28.115045 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:28.115106 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:28.153086 1076050 cri.go:89] found id: ""
	I0127 15:40:28.153115 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.153126 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:28.153135 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:28.153198 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:28.189564 1076050 cri.go:89] found id: ""
	I0127 15:40:28.189597 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.189607 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:28.189623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:28.189680 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:28.228037 1076050 cri.go:89] found id: ""
	I0127 15:40:28.228067 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.228076 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:28.228083 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:28.228163 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:28.277124 1076050 cri.go:89] found id: ""
	I0127 15:40:28.277155 1076050 logs.go:282] 0 containers: []
	W0127 15:40:28.277168 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:28.277179 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:28.277192 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:28.340183 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:28.340231 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:28.356822 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:28.356854 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:28.428923 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:28.428951 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:28.428968 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:28.833666 1074659 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:28.833746 1074659 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:28.833840 1074659 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:28.833927 1074659 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:28.834008 1074659 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:28.834082 1074659 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:28.835576 1074659 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:28.835644 1074659 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:28.835701 1074659 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:28.835776 1074659 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:28.835840 1074659 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:28.835918 1074659 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:28.835984 1074659 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:28.836079 1074659 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:28.836170 1074659 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:28.836279 1074659 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:28.836382 1074659 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:28.836440 1074659 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:28.836506 1074659 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:28.836564 1074659 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:28.836645 1074659 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:28.836728 1074659 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:28.836800 1074659 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:28.836889 1074659 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:28.836973 1074659 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:28.837079 1074659 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:28.838668 1074659 out.go:235]   - Booting up control plane ...
	I0127 15:40:28.838772 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:28.838882 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:28.838967 1074659 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:28.839120 1074659 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:28.839212 1074659 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:28.839261 1074659 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:28.839412 1074659 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:28.839527 1074659 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:28.839621 1074659 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.133738ms
	I0127 15:40:28.839718 1074659 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:28.839793 1074659 kubeadm.go:310] [api-check] The API server is healthy after 5.001467165s
	I0127 15:40:28.839883 1074659 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:40:28.840019 1074659 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:40:28.840098 1074659 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:40:28.840257 1074659 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-458006 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:40:28.840304 1074659 kubeadm.go:310] [bootstrap-token] Using token: ysn4g1.5k9s54b5xvzc8py2
	I0127 15:40:28.841707 1074659 out.go:235]   - Configuring RBAC rules ...
	I0127 15:40:28.841821 1074659 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:40:28.841908 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:40:28.842072 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:40:28.842254 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:40:28.842425 1074659 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:40:28.842542 1074659 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:40:28.842654 1074659 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:40:28.842695 1074659 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:40:28.842739 1074659 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:40:28.842746 1074659 kubeadm.go:310] 
	I0127 15:40:28.842794 1074659 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:40:28.842803 1074659 kubeadm.go:310] 
	I0127 15:40:28.842866 1074659 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:40:28.842878 1074659 kubeadm.go:310] 
	I0127 15:40:28.842923 1074659 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:40:28.843010 1074659 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:40:28.843103 1074659 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:40:28.843112 1074659 kubeadm.go:310] 
	I0127 15:40:28.843207 1074659 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:40:28.843222 1074659 kubeadm.go:310] 
	I0127 15:40:28.843297 1074659 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:40:28.843312 1074659 kubeadm.go:310] 
	I0127 15:40:28.843389 1074659 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:40:28.843486 1074659 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:40:28.843560 1074659 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:40:28.843568 1074659 kubeadm.go:310] 
	I0127 15:40:28.843641 1074659 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:40:28.843710 1074659 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:40:28.843716 1074659 kubeadm.go:310] 
	I0127 15:40:28.843788 1074659 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ysn4g1.5k9s54b5xvzc8py2 \
	I0127 15:40:28.843875 1074659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:40:28.843899 1074659 kubeadm.go:310] 	--control-plane 
	I0127 15:40:28.843908 1074659 kubeadm.go:310] 
	I0127 15:40:28.844015 1074659 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:40:28.844024 1074659 kubeadm.go:310] 
	I0127 15:40:28.844090 1074659 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ysn4g1.5k9s54b5xvzc8py2 \
	I0127 15:40:28.844200 1074659 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:40:28.844221 1074659 cni.go:84] Creating CNI manager for ""
	I0127 15:40:28.844233 1074659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:40:28.845800 1074659 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:40:28.847251 1074659 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:40:28.858165 1074659 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:40:28.881328 1074659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:40:28.881400 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:28.881455 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-458006 minikube.k8s.io/updated_at=2025_01_27T15_40_28_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=no-preload-458006 minikube.k8s.io/primary=true
	I0127 15:40:28.897996 1074659 ops.go:34] apiserver oom_adj: -16
	I0127 15:40:29.095553 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:29.596344 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:30.096320 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:30.596512 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:31.096689 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:31.596534 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:32.096361 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:32.595892 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:33.095702 1074659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:33.238790 1074659 kubeadm.go:1113] duration metric: took 4.357463541s to wait for elevateKubeSystemPrivileges
	I0127 15:40:33.238848 1074659 kubeadm.go:394] duration metric: took 5m2.327511742s to StartCluster
	I0127 15:40:33.238888 1074659 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:33.239099 1074659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:40:33.240861 1074659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:33.241710 1074659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.30 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:40:33.241765 1074659 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:40:33.241896 1074659 addons.go:69] Setting storage-provisioner=true in profile "no-preload-458006"
	I0127 15:40:33.241924 1074659 addons.go:238] Setting addon storage-provisioner=true in "no-preload-458006"
	W0127 15:40:33.241936 1074659 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:40:33.241970 1074659 config.go:182] Loaded profile config "no-preload-458006": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:40:33.241993 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242098 1074659 addons.go:69] Setting default-storageclass=true in profile "no-preload-458006"
	I0127 15:40:33.242136 1074659 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-458006"
	I0127 15:40:33.242491 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.242558 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.242562 1074659 addons.go:69] Setting dashboard=true in profile "no-preload-458006"
	I0127 15:40:33.242579 1074659 addons.go:238] Setting addon dashboard=true in "no-preload-458006"
	W0127 15:40:33.242587 1074659 addons.go:247] addon dashboard should already be in state true
	I0127 15:40:33.242619 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242642 1074659 addons.go:69] Setting metrics-server=true in profile "no-preload-458006"
	I0127 15:40:33.242681 1074659 addons.go:238] Setting addon metrics-server=true in "no-preload-458006"
	W0127 15:40:33.242703 1074659 addons.go:247] addon metrics-server should already be in state true
	I0127 15:40:33.242748 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.242982 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243002 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243017 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.243038 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.243162 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.243195 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.246220 1074659 out.go:177] * Verifying Kubernetes components...
	I0127 15:40:33.247844 1074659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:40:33.260866 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I0127 15:40:33.260900 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0127 15:40:33.260867 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0127 15:40:33.261687 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.261705 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.261805 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0127 15:40:33.262293 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262298 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262311 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.262320 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.262394 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.262663 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.262770 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.262824 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.262973 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.262988 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.263265 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.263294 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.263301 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.263705 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.263777 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.263793 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.264103 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.264138 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.264160 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.265173 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.265220 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.266841 1074659 addons.go:238] Setting addon default-storageclass=true in "no-preload-458006"
	W0127 15:40:33.266861 1074659 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:40:33.266882 1074659 host.go:66] Checking if "no-preload-458006" exists ...
	I0127 15:40:33.267142 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.267186 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.284237 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0127 15:40:33.284787 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.285432 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.285458 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.285817 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.286054 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.288006 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.288915 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36501
	I0127 15:40:33.289278 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0127 15:40:33.289464 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.289551 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.290021 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.290033 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.290128 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.290135 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.290430 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.290487 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.290488 1074659 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:40:33.290680 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.290956 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.293313 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.293608 1074659 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:40:33.293756 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.295556 1074659 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:40:33.295557 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:40:33.295679 1074659 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:40:33.295688 1074659 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:40:32.977057 1074908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.901370931s)
	I0127 15:40:32.977156 1074908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:32.998093 1074908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:33.014544 1074908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:33.041108 1074908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:33.041138 1074908 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:33.041203 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:40:33.058390 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:33.058462 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:33.070074 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:40:33.087447 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:33.087524 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:33.101890 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:40:33.112384 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:33.112460 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:33.122774 1074908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:40:33.133115 1074908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:33.133183 1074908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:40:33.143719 1074908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:33.201432 1074908 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:33.201519 1074908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:33.371439 1074908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:33.371619 1074908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:33.371746 1074908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:33.380800 1074908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:28.505128 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:28.505170 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:31.047029 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:31.060582 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:31.060685 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:31.097127 1076050 cri.go:89] found id: ""
	I0127 15:40:31.097150 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.097160 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:31.097168 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:31.097230 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:31.134764 1076050 cri.go:89] found id: ""
	I0127 15:40:31.134799 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.134810 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:31.134818 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:31.134900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:31.174779 1076050 cri.go:89] found id: ""
	I0127 15:40:31.174807 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.174816 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:31.174822 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:31.174875 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:31.215471 1076050 cri.go:89] found id: ""
	I0127 15:40:31.215503 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.215513 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:31.215519 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:31.215572 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:31.253765 1076050 cri.go:89] found id: ""
	I0127 15:40:31.253796 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.253804 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:31.253811 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:31.253867 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:31.297130 1076050 cri.go:89] found id: ""
	I0127 15:40:31.297161 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.297170 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:31.297176 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:31.297240 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:31.335280 1076050 cri.go:89] found id: ""
	I0127 15:40:31.335315 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.335326 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:31.335334 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:31.335406 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:31.372619 1076050 cri.go:89] found id: ""
	I0127 15:40:31.372652 1076050 logs.go:282] 0 containers: []
	W0127 15:40:31.372664 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:31.372678 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:31.372693 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:31.427666 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:31.427709 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:31.442810 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:31.442842 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:31.511297 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:31.511330 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:31.511354 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:31.595122 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:31.595168 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:33.383521 1074908 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:33.383651 1074908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:33.383757 1074908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:33.383895 1074908 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:33.383985 1074908 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:33.384074 1074908 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:33.384147 1074908 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:33.384245 1074908 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:33.384323 1074908 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:33.384413 1074908 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:33.384510 1074908 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:33.384563 1074908 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:33.384642 1074908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:33.553965 1074908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:33.739507 1074908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:33.994637 1074908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:34.154265 1074908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:34.373069 1074908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:34.373791 1074908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:34.379843 1074908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:33.295709 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.297475 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:40:33.297501 1074659 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:40:33.297523 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.300714 1074659 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:33.300736 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:40:33.300756 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.301635 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I0127 15:40:33.302333 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.302863 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.302880 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.303349 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.303970 1074659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:33.304013 1074659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:33.305284 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.305834 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.305864 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306025 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.306086 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306246 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.306406 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.306488 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.306592 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.309540 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.309565 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.309810 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.310021 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.310146 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.310163 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.310320 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.310404 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.310566 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.310593 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.310786 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.310945 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.329960 1074659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0127 15:40:33.330745 1074659 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:33.331477 1074659 main.go:141] libmachine: Using API Version  1
	I0127 15:40:33.331497 1074659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:33.331931 1074659 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:33.332248 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetState
	I0127 15:40:33.334148 1074659 main.go:141] libmachine: (no-preload-458006) Calling .DriverName
	I0127 15:40:33.337343 1074659 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:33.337364 1074659 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:40:33.337387 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHHostname
	I0127 15:40:33.344679 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.345163 1074659 main.go:141] libmachine: (no-preload-458006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:b5:94", ip: ""} in network mk-no-preload-458006: {Iface:virbr1 ExpiryTime:2025-01-27 16:35:04 +0000 UTC Type:0 Mac:52:54:00:4f:b5:94 Iaid: IPaddr:192.168.50.30 Prefix:24 Hostname:no-preload-458006 Clientid:01:52:54:00:4f:b5:94}
	I0127 15:40:33.345261 1074659 main.go:141] libmachine: (no-preload-458006) DBG | domain no-preload-458006 has defined IP address 192.168.50.30 and MAC address 52:54:00:4f:b5:94 in network mk-no-preload-458006
	I0127 15:40:33.345521 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHPort
	I0127 15:40:33.345738 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHKeyPath
	I0127 15:40:33.345938 1074659 main.go:141] libmachine: (no-preload-458006) Calling .GetSSHUsername
	I0127 15:40:33.346117 1074659 sshutil.go:53] new ssh client: &{IP:192.168.50.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/no-preload-458006/id_rsa Username:docker}
	I0127 15:40:33.464899 1074659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:40:33.489798 1074659 node_ready.go:35] waiting up to 6m0s for node "no-preload-458006" to be "Ready" ...
	I0127 15:40:33.523407 1074659 node_ready.go:49] node "no-preload-458006" has status "Ready":"True"
	I0127 15:40:33.523440 1074659 node_ready.go:38] duration metric: took 33.61111ms for node "no-preload-458006" to be "Ready" ...
	I0127 15:40:33.523453 1074659 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:33.535257 1074659 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:33.568512 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:33.587974 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:40:33.588003 1074659 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:40:33.619075 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:40:33.619099 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:40:33.633023 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:40:33.633068 1074659 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:40:33.642970 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:33.657566 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:40:33.657595 1074659 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:40:33.664558 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:40:33.664588 1074659 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:40:33.687856 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:40:33.687883 1074659 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:40:33.714005 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:40:33.714036 1074659 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:40:33.727527 1074659 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:33.727554 1074659 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:40:33.764439 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:33.790606 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:40:33.790639 1074659 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:40:33.826641 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.826674 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.827044 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.827065 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.827075 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.827083 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.827331 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.827363 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:33.827373 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.834226 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:33.834269 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:33.834561 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:33.834578 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:33.867815 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:40:33.867848 1074659 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:40:33.891318 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:40:33.891362 1074659 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:40:33.964578 1074659 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:33.964616 1074659 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:40:34.002418 1074659 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:34.279743 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.279829 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.280331 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:34.280397 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.280425 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.280447 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.280473 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.280769 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:34.280818 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.280833 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.817958 1074659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.053479215s)
	I0127 15:40:34.818069 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.818092 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.818435 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.818495 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.818509 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:34.818518 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:34.818778 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:34.818799 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:34.818811 1074659 addons.go:479] Verifying addon metrics-server=true in "no-preload-458006"
	I0127 15:40:35.547309 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:36.514576 1074659 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.512097478s)
	I0127 15:40:36.514647 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:36.514666 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:36.515033 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:36.515046 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:36.515111 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:36.515130 1074659 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:36.515153 1074659 main.go:141] libmachine: (no-preload-458006) Calling .Close
	I0127 15:40:36.515488 1074659 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:36.515527 1074659 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:36.515503 1074659 main.go:141] libmachine: (no-preload-458006) DBG | Closing plugin on server side
	I0127 15:40:36.517645 1074659 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-458006 addons enable metrics-server
	
	I0127 15:40:36.519535 1074659 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 15:40:36.520964 1074659 addons.go:514] duration metric: took 3.279215802s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 15:40:34.138287 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:34.156651 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:34.156734 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:34.194604 1076050 cri.go:89] found id: ""
	I0127 15:40:34.194647 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.194658 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:34.194666 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:34.194729 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:34.233299 1076050 cri.go:89] found id: ""
	I0127 15:40:34.233353 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.233363 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:34.233369 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:34.233423 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:34.274424 1076050 cri.go:89] found id: ""
	I0127 15:40:34.274453 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.274465 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:34.274473 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:34.274539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:34.317113 1076050 cri.go:89] found id: ""
	I0127 15:40:34.317144 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.317155 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:34.317168 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:34.317239 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:34.359212 1076050 cri.go:89] found id: ""
	I0127 15:40:34.359242 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.359252 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:34.359261 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:34.359328 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:34.398773 1076050 cri.go:89] found id: ""
	I0127 15:40:34.398805 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.398824 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:34.398833 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:34.398910 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:34.440053 1076050 cri.go:89] found id: ""
	I0127 15:40:34.440087 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.440099 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:34.440107 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:34.440178 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:34.482908 1076050 cri.go:89] found id: ""
	I0127 15:40:34.482943 1076050 logs.go:282] 0 containers: []
	W0127 15:40:34.482959 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:34.482973 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:34.482992 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:34.500178 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:34.500206 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:34.580251 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:34.580279 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:34.580302 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:34.673730 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:34.673772 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:34.720797 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:34.720838 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:37.282487 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:37.300162 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:37.300231 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:37.348753 1076050 cri.go:89] found id: ""
	I0127 15:40:37.348786 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.348798 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:37.348806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:37.348870 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:37.398630 1076050 cri.go:89] found id: ""
	I0127 15:40:37.398669 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.398681 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:37.398689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:37.398761 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:37.437030 1076050 cri.go:89] found id: ""
	I0127 15:40:37.437127 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.437155 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:37.437188 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:37.437277 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:37.477745 1076050 cri.go:89] found id: ""
	I0127 15:40:37.477837 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.477855 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:37.477864 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:37.477937 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:37.514259 1076050 cri.go:89] found id: ""
	I0127 15:40:37.514292 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.514302 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:37.514311 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:37.514385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:37.551313 1076050 cri.go:89] found id: ""
	I0127 15:40:37.551349 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.551359 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:37.551367 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:37.551427 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:37.593740 1076050 cri.go:89] found id: ""
	I0127 15:40:37.593772 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.593783 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:37.593791 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:37.593854 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:37.634133 1076050 cri.go:89] found id: ""
	I0127 15:40:37.634169 1076050 logs.go:282] 0 containers: []
	W0127 15:40:37.634181 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:37.634194 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:37.634217 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:37.699046 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:37.699092 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:37.717470 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:37.717512 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:37.791051 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:37.791077 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:37.791106 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:37.882694 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:37.882742 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:34.381325 1074908 out.go:235]   - Booting up control plane ...
	I0127 15:40:34.381471 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:34.381579 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:34.382092 1074908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:34.406494 1074908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:34.413899 1074908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:34.414029 1074908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:34.583151 1074908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:34.583269 1074908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:35.584905 1074908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001687336s
	I0127 15:40:35.585033 1074908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:40.587681 1074908 kubeadm.go:310] [api-check] The API server is healthy after 5.001284493s
	I0127 15:40:40.610814 1074908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:40:40.631959 1074908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:40:40.691115 1074908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:40:40.691368 1074908 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-349782 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:40:40.717976 1074908 kubeadm.go:310] [bootstrap-token] Using token: 2miseq.yzn49d7krpbx0jxu
	I0127 15:40:40.719603 1074908 out.go:235]   - Configuring RBAC rules ...
	I0127 15:40:40.719764 1074908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:40:40.734536 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:40:40.754140 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:40:40.763500 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:40:40.769897 1074908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:40:40.777335 1074908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:40:40.995105 1074908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:40:41.449029 1074908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:40:41.995223 1074908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:40:41.996543 1074908 kubeadm.go:310] 
	I0127 15:40:41.996660 1074908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:40:41.996672 1074908 kubeadm.go:310] 
	I0127 15:40:41.996788 1074908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:40:41.996798 1074908 kubeadm.go:310] 
	I0127 15:40:41.996838 1074908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:40:41.996921 1074908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:40:41.996994 1074908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:40:41.997025 1074908 kubeadm.go:310] 
	I0127 15:40:41.997151 1074908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:40:41.997173 1074908 kubeadm.go:310] 
	I0127 15:40:41.997241 1074908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:40:41.997253 1074908 kubeadm.go:310] 
	I0127 15:40:41.997329 1074908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:40:41.997435 1074908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:40:41.997539 1074908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:40:41.997547 1074908 kubeadm.go:310] 
	I0127 15:40:41.997672 1074908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:40:41.997777 1074908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:40:41.997789 1074908 kubeadm.go:310] 
	I0127 15:40:41.997873 1074908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2miseq.yzn49d7krpbx0jxu \
	I0127 15:40:41.997954 1074908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:40:41.997974 1074908 kubeadm.go:310] 	--control-plane 
	I0127 15:40:41.997980 1074908 kubeadm.go:310] 
	I0127 15:40:41.998045 1074908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:40:41.998056 1074908 kubeadm.go:310] 
	I0127 15:40:41.998117 1074908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2miseq.yzn49d7krpbx0jxu \
	I0127 15:40:41.998204 1074908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:40:41.999397 1074908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:40:41.999437 1074908 cni.go:84] Creating CNI manager for ""
	I0127 15:40:41.999448 1074908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:40:42.001383 1074908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:40:38.042609 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:40.046811 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:40.431585 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:40.449664 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:40.449766 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:40.500904 1076050 cri.go:89] found id: ""
	I0127 15:40:40.500995 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.501020 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:40.501029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:40.501103 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:40.543907 1076050 cri.go:89] found id: ""
	I0127 15:40:40.543939 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.543950 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:40.543958 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:40.544018 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:40.592294 1076050 cri.go:89] found id: ""
	I0127 15:40:40.592328 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.592339 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:40.592352 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:40.592418 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:40.641396 1076050 cri.go:89] found id: ""
	I0127 15:40:40.641429 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.641439 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:40.641449 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:40.641522 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:40.687151 1076050 cri.go:89] found id: ""
	I0127 15:40:40.687185 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.687197 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:40.687206 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:40.687279 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:40.728537 1076050 cri.go:89] found id: ""
	I0127 15:40:40.728573 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.728584 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:40.728593 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:40.728666 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:40.770995 1076050 cri.go:89] found id: ""
	I0127 15:40:40.771022 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.771035 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:40.771042 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:40.771108 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:40.818299 1076050 cri.go:89] found id: ""
	I0127 15:40:40.818332 1076050 logs.go:282] 0 containers: []
	W0127 15:40:40.818344 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:40.818357 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:40.818379 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:40.835538 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:40.835566 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:40.912785 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:40.912812 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:40.912829 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:41.029124 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:41.029177 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:41.088618 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:41.088649 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:42.002886 1074908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:40:42.019774 1074908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:40:42.041710 1074908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:40:42.041880 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:42.042011 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-349782 minikube.k8s.io/updated_at=2025_01_27T15_40_42_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=embed-certs-349782 minikube.k8s.io/primary=true
	I0127 15:40:42.071903 1074908 ops.go:34] apiserver oom_adj: -16
	I0127 15:40:42.298644 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:42.799727 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:43.299289 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:43.799485 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:44.299597 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:44.799559 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:45.299631 1074908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:45.388381 1074908 kubeadm.go:1113] duration metric: took 3.346560313s to wait for elevateKubeSystemPrivileges
	I0127 15:40:45.388421 1074908 kubeadm.go:394] duration metric: took 5m3.554845692s to StartCluster
	I0127 15:40:45.388444 1074908 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:45.388536 1074908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:40:45.390768 1074908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:40:45.391081 1074908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.43 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:40:45.391145 1074908 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:40:45.391269 1074908 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-349782"
	I0127 15:40:45.391288 1074908 addons.go:69] Setting dashboard=true in profile "embed-certs-349782"
	I0127 15:40:45.391320 1074908 addons.go:238] Setting addon dashboard=true in "embed-certs-349782"
	I0127 15:40:45.391319 1074908 addons.go:69] Setting metrics-server=true in profile "embed-certs-349782"
	I0127 15:40:45.391294 1074908 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-349782"
	I0127 15:40:45.391334 1074908 config.go:182] Loaded profile config "embed-certs-349782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:40:45.391343 1074908 addons.go:238] Setting addon metrics-server=true in "embed-certs-349782"
	W0127 15:40:45.391353 1074908 addons.go:247] addon metrics-server should already be in state true
	W0127 15:40:45.391330 1074908 addons.go:247] addon dashboard should already be in state true
	W0127 15:40:45.391338 1074908 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:40:45.391406 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391417 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391276 1074908 addons.go:69] Setting default-storageclass=true in profile "embed-certs-349782"
	I0127 15:40:45.391503 1074908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-349782"
	I0127 15:40:45.391386 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.391836 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391838 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391876 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.391925 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391951 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.391954 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.391982 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.392044 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.396751 1074908 out.go:177] * Verifying Kubernetes components...
	I0127 15:40:45.398763 1074908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:40:45.411089 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0127 15:40:45.411341 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0127 15:40:45.411740 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.411839 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.412321 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.412348 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.412429 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45519
	I0127 15:40:45.412455 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.412471 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.412710 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.412921 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.413145 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34947
	I0127 15:40:45.413359 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.413399 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.413439 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.413451 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.413623 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.413854 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.413991 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.414216 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.414233 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.414273 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.414298 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.414583 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.414766 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.414772 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.414845 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.418728 1074908 addons.go:238] Setting addon default-storageclass=true in "embed-certs-349782"
	W0127 15:40:45.418755 1074908 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:40:45.418787 1074908 host.go:66] Checking if "embed-certs-349782" exists ...
	I0127 15:40:45.419153 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.419189 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.436563 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0127 15:40:45.437032 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.437309 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44033
	I0127 15:40:45.437764 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.437783 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.437859 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0127 15:40:45.437986 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.438180 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.438423 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.438439 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.438503 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.438549 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.439042 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.439059 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.439120 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.439496 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.439564 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.440296 1074908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:40:45.440349 1074908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:40:45.440835 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.441539 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0127 15:40:45.442136 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.442687 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.443524 1074908 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:40:45.443584 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.443599 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.443863 1074908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:40:45.443950 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.444664 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.445476 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:40:45.445498 1074908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:40:45.445531 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.446460 1074908 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:40:45.446697 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.451306 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:40:45.456066 1074908 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:40:45.452788 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.456096 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.454144 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.456132 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.456169 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.456379 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.456396 1074908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:40:42.547331 1074659 pod_ready.go:103] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:44.081830 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.081865 1074659 pod_ready.go:82] duration metric: took 10.546579527s for pod "coredns-668d6bf9bc-sp7p4" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.081882 1074659 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.097962 1074659 pod_ready.go:93] pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.097994 1074659 pod_ready.go:82] duration metric: took 16.102725ms for pod "coredns-668d6bf9bc-xgx78" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.098014 1074659 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.117810 1074659 pod_ready.go:93] pod "etcd-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.117845 1074659 pod_ready.go:82] duration metric: took 19.821766ms for pod "etcd-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.117861 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.147522 1074659 pod_ready.go:93] pod "kube-apiserver-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.147557 1074659 pod_ready.go:82] duration metric: took 29.685956ms for pod "kube-apiserver-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.147573 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.163535 1074659 pod_ready.go:93] pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.163570 1074659 pod_ready.go:82] duration metric: took 15.987018ms for pod "kube-controller-manager-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.163585 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6j6r5" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.440133 1074659 pod_ready.go:93] pod "kube-proxy-6j6r5" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.440165 1074659 pod_ready.go:82] duration metric: took 276.571766ms for pod "kube-proxy-6j6r5" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.440180 1074659 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.865610 1074659 pod_ready.go:93] pod "kube-scheduler-no-preload-458006" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:44.865643 1074659 pod_ready.go:82] duration metric: took 425.453541ms for pod "kube-scheduler-no-preload-458006" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:44.865655 1074659 pod_ready.go:39] duration metric: took 11.34218973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:44.865682 1074659 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:40:44.865746 1074659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:44.906758 1074659 api_server.go:72] duration metric: took 11.665005612s to wait for apiserver process to appear ...
	I0127 15:40:44.906793 1074659 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:40:44.906829 1074659 api_server.go:253] Checking apiserver healthz at https://192.168.50.30:8443/healthz ...
	I0127 15:40:44.912296 1074659 api_server.go:279] https://192.168.50.30:8443/healthz returned 200:
	ok
	I0127 15:40:44.913396 1074659 api_server.go:141] control plane version: v1.32.1
	I0127 15:40:44.913416 1074659 api_server.go:131] duration metric: took 6.606206ms to wait for apiserver health ...
	I0127 15:40:44.913424 1074659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:40:45.045967 1074659 system_pods.go:59] 9 kube-system pods found
	I0127 15:40:45.046012 1074659 system_pods.go:61] "coredns-668d6bf9bc-sp7p4" [7fbb8eca-e2e6-4760-a0b6-8c6387fe9960] Running
	I0127 15:40:45.046020 1074659 system_pods.go:61] "coredns-668d6bf9bc-xgx78" [c3cc3887-d694-4b39-9ad1-c03fcf97b608] Running
	I0127 15:40:45.046025 1074659 system_pods.go:61] "etcd-no-preload-458006" [2474c045-aaa4-4190-8392-3dea1976ded1] Running
	I0127 15:40:45.046031 1074659 system_pods.go:61] "kube-apiserver-no-preload-458006" [2529a3ec-c6a0-4cc7-b93a-7964e435ada0] Running
	I0127 15:40:45.046038 1074659 system_pods.go:61] "kube-controller-manager-no-preload-458006" [989d2483-4dc3-4add-ad64-7f76d4b5c765] Running
	I0127 15:40:45.046043 1074659 system_pods.go:61] "kube-proxy-6j6r5" [3ca06a87-654b-42c2-ac04-12d9b0472973] Running
	I0127 15:40:45.046047 1074659 system_pods.go:61] "kube-scheduler-no-preload-458006" [f6afe797-0eed-4f54-8ed6-fbe75d411b7a] Running
	I0127 15:40:45.046056 1074659 system_pods.go:61] "metrics-server-f79f97bbb-k7879" [137f45e8-cf1d-404b-af06-4b99a257450f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:40:45.046063 1074659 system_pods.go:61] "storage-provisioner" [8e874460-b5bf-4ce6-b1ca-9c188b1fd4e6] Running
	I0127 15:40:45.046074 1074659 system_pods.go:74] duration metric: took 132.642132ms to wait for pod list to return data ...
	I0127 15:40:45.046089 1074659 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:40:45.246663 1074659 default_sa.go:45] found service account: "default"
	I0127 15:40:45.246694 1074659 default_sa.go:55] duration metric: took 200.600423ms for default service account to be created ...
	I0127 15:40:45.246707 1074659 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:40:45.449871 1074659 system_pods.go:87] 9 kube-system pods found
	I0127 15:40:43.646818 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:43.660154 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:43.660237 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:43.698517 1076050 cri.go:89] found id: ""
	I0127 15:40:43.698548 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.698557 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:43.698563 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:43.698624 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:43.736919 1076050 cri.go:89] found id: ""
	I0127 15:40:43.736954 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.736967 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:43.736978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:43.737064 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:43.777333 1076050 cri.go:89] found id: ""
	I0127 15:40:43.777369 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.777382 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:43.777391 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:43.777462 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:43.817427 1076050 cri.go:89] found id: ""
	I0127 15:40:43.817460 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.817471 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:43.817480 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:43.817546 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:43.866498 1076050 cri.go:89] found id: ""
	I0127 15:40:43.866527 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.866538 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:43.866546 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:43.866616 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:43.919477 1076050 cri.go:89] found id: ""
	I0127 15:40:43.919510 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.919521 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:43.919530 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:43.919593 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:43.958203 1076050 cri.go:89] found id: ""
	I0127 15:40:43.958242 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.958261 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:43.958270 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:43.958340 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:43.996729 1076050 cri.go:89] found id: ""
	I0127 15:40:43.996760 1076050 logs.go:282] 0 containers: []
	W0127 15:40:43.996769 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:43.996779 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:43.996792 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:44.051707 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:44.051748 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:44.069643 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:44.069674 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:44.146464 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:44.146489 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:44.146505 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:44.230654 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:44.230696 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:46.788290 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:46.807855 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:46.807942 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:46.861569 1076050 cri.go:89] found id: ""
	I0127 15:40:46.861596 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.861608 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:46.861615 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:46.861684 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:46.919686 1076050 cri.go:89] found id: ""
	I0127 15:40:46.919719 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.919732 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:46.919741 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:46.919810 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:46.959359 1076050 cri.go:89] found id: ""
	I0127 15:40:46.959419 1076050 logs.go:282] 0 containers: []
	W0127 15:40:46.959432 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:46.959440 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:46.959503 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:47.000445 1076050 cri.go:89] found id: ""
	I0127 15:40:47.000489 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.000503 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:47.000512 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:47.000583 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:47.041395 1076050 cri.go:89] found id: ""
	I0127 15:40:47.041426 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.041440 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:47.041449 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:47.041512 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:47.086753 1076050 cri.go:89] found id: ""
	I0127 15:40:47.086787 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.086800 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:47.086808 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:47.086883 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:47.128760 1076050 cri.go:89] found id: ""
	I0127 15:40:47.128788 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.128799 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:47.128807 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:47.128876 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:47.173743 1076050 cri.go:89] found id: ""
	I0127 15:40:47.173779 1076050 logs.go:282] 0 containers: []
	W0127 15:40:47.173791 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:47.173804 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:47.173818 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:47.280755 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:47.280817 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:47.343245 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:47.343291 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:47.425229 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:47.425282 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:47.446605 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:47.446649 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:47.563807 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:45.456519 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.456939 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.457981 1074908 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:45.458002 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:40:45.458020 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.460172 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.460862 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.460921 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.461259 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.461487 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.461715 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.461874 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.462195 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.462273 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.462309 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.462659 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.462819 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.462924 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.463019 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.464793 1074908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
	I0127 15:40:45.465301 1074908 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:40:45.465795 1074908 main.go:141] libmachine: Using API Version  1
	I0127 15:40:45.465815 1074908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:40:45.468906 1074908 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:40:45.469208 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetState
	I0127 15:40:45.471230 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .DriverName
	I0127 15:40:45.471522 1074908 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:45.471538 1074908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:40:45.471562 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHHostname
	I0127 15:40:45.474700 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.475171 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:3b:df", ip: ""} in network mk-embed-certs-349782: {Iface:virbr3 ExpiryTime:2025-01-27 16:35:25 +0000 UTC Type:0 Mac:52:54:00:47:3b:df Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:embed-certs-349782 Clientid:01:52:54:00:47:3b:df}
	I0127 15:40:45.475203 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | domain embed-certs-349782 has defined IP address 192.168.61.43 and MAC address 52:54:00:47:3b:df in network mk-embed-certs-349782
	I0127 15:40:45.475388 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHPort
	I0127 15:40:45.475596 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHKeyPath
	I0127 15:40:45.475722 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .GetSSHUsername
	I0127 15:40:45.475899 1074908 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/embed-certs-349782/id_rsa Username:docker}
	I0127 15:40:45.617662 1074908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:40:45.639438 1074908 node_ready.go:35] waiting up to 6m0s for node "embed-certs-349782" to be "Ready" ...
	I0127 15:40:45.668405 1074908 node_ready.go:49] node "embed-certs-349782" has status "Ready":"True"
	I0127 15:40:45.668432 1074908 node_ready.go:38] duration metric: took 28.956722ms for node "embed-certs-349782" to be "Ready" ...
	I0127 15:40:45.668451 1074908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:45.676760 1074908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:45.743936 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:40:45.743967 1074908 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:40:45.755731 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:40:45.759201 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:40:45.759233 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:40:45.772228 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:40:45.805739 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:40:45.805773 1074908 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:40:45.823459 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:40:45.823500 1074908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:40:45.854823 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:40:45.854859 1074908 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:40:45.891284 1074908 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:45.891327 1074908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:40:45.931396 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:40:45.931431 1074908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:40:46.015320 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:40:46.015360 1074908 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:40:46.015364 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:40:46.083527 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:40:46.083563 1074908 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:40:46.246566 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:40:46.246597 1074908 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:40:46.376290 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:40:46.376329 1074908 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:40:46.427597 1074908 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:46.427631 1074908 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:40:46.482003 1074908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:40:47.410166 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.637893772s)
	I0127 15:40:47.410259 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.410166 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.654370109s)
	I0127 15:40:47.410282 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.410349 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.410372 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.410843 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.410875 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.412611 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.412628 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.412638 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.412646 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.412761 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.412798 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.412830 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.412850 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.412903 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.413172 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.413266 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.413342 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.414418 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.414437 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.474683 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.474722 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.475077 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.475151 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.475172 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.777164 1074908 pod_ready.go:103] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:47.977107 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.961691521s)
	I0127 15:40:47.977187 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.977203 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.977515 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:47.977556 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.977595 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.977608 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:47.977619 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:47.977883 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:47.977933 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:47.977955 1074908 addons.go:479] Verifying addon metrics-server=true in "embed-certs-349782"
	I0127 15:40:47.977965 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:49.266293 1074908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.7842336s)
	I0127 15:40:49.266371 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:49.266386 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:49.266731 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:49.266754 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:49.266771 1074908 main.go:141] libmachine: Making call to close driver server
	I0127 15:40:49.266779 1074908 main.go:141] libmachine: (embed-certs-349782) Calling .Close
	I0127 15:40:49.267033 1074908 main.go:141] libmachine: (embed-certs-349782) DBG | Closing plugin on server side
	I0127 15:40:49.267086 1074908 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:40:49.267106 1074908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:40:49.268778 1074908 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-349782 addons enable metrics-server
	
	I0127 15:40:49.270188 1074908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
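The addon lines above follow one pattern: each manifest is copied to /etc/kubernetes/addons/ and then a single "sudo KUBECONFIG=... kubectl apply" call applies the whole group with repeated -f flags. A minimal, stand-alone sketch of that pattern is below; it is not minikube's addons.go, the helper name is hypothetical, and the kubectl/kubeconfig paths and manifest names are simply taken from the log (the list is shortened for illustration).

package main

import (
	"fmt"
	"os/exec"
)

// applyAddons applies the copied manifests with one kubectl invocation,
// mirroring the "sudo KUBECONFIG=... kubectl apply -f ... -f ..." calls above.
func applyAddons(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Paths taken from the log; illustrative subset of the dashboard manifests.
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	if err := applyAddons("/var/lib/minikube/binaries/v1.32.1/kubectl",
		"/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Println(err)
	}
}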
	I0127 15:40:52.460023 1075160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.936714261s)
	I0127 15:40:52.460128 1075160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:40:52.476845 1075160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:40:52.487966 1075160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:40:52.499961 1075160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:40:52.499988 1075160 kubeadm.go:157] found existing configuration files:
	
	I0127 15:40:52.500037 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0127 15:40:52.511034 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:40:52.511115 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:40:52.524517 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0127 15:40:52.534966 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:40:52.535048 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:40:52.545245 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0127 15:40:52.555070 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:40:52.555149 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:40:52.569605 1075160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0127 15:40:52.581711 1075160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:40:52.581794 1075160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
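The kubeadm.go lines above show the stale-config cleanup step: for each existing /etc/kubernetes/*.conf the expected control-plane endpoint is grepped for, and any file that does not mention it (or does not exist) is removed so that the following "kubeadm init" can regenerate it. The sketch below is a hypothetical local equivalent of that check, not minikube's implementation; the endpoint and file list are taken from the log, and the real code runs these commands over SSH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Endpoint and config paths as they appear in the log above.
	endpoint := "https://control-plane.minikube.internal:8444"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		// grep exits non-zero when the endpoint is missing or the file is absent.
		if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, c)
			// Remove the stale file so kubeadm init can write a fresh one.
			_ = exec.Command("sudo", "rm", "-f", c).Run()
		}
	}
}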
	I0127 15:40:52.592228 1075160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:40:52.654498 1075160 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 15:40:52.654647 1075160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:40:52.779741 1075160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:40:52.779912 1075160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:40:52.780069 1075160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 15:40:52.790096 1075160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:40:50.064460 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:50.080142 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:50.080219 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:50.120604 1076050 cri.go:89] found id: ""
	I0127 15:40:50.120643 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.120655 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:50.120661 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:50.120716 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:50.161728 1076050 cri.go:89] found id: ""
	I0127 15:40:50.161766 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.161777 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:50.161785 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:50.161851 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:50.199247 1076050 cri.go:89] found id: ""
	I0127 15:40:50.199275 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.199286 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:50.199293 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:50.199369 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:50.246623 1076050 cri.go:89] found id: ""
	I0127 15:40:50.246652 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.246663 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:50.246672 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:50.246742 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:50.284077 1076050 cri.go:89] found id: ""
	I0127 15:40:50.284111 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.284123 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:50.284132 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:50.284200 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:50.326481 1076050 cri.go:89] found id: ""
	I0127 15:40:50.326518 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.326530 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:50.326539 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:50.326597 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:50.364165 1076050 cri.go:89] found id: ""
	I0127 15:40:50.364198 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.364210 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:50.364218 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:50.364280 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:50.402527 1076050 cri.go:89] found id: ""
	I0127 15:40:50.402560 1076050 logs.go:282] 0 containers: []
	W0127 15:40:50.402572 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:50.402586 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:50.402602 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:50.485370 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:50.485412 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:50.539508 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:50.539547 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:50.591618 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:50.591656 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:50.609824 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:50.609873 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:50.694094 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
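The cycle that just completed (and repeats below) lists CRI containers for each control-plane component with "crictl ps -a --quiet --name=<component>" and treats an empty result as "no container was found". A hedged, stand-alone sketch of that listing loop follows; it is an illustration of the pattern recorded in the log, not minikube's cri.go, and it assumes crictl is available on the host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names as they appear in the log above.
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		// --quiet prints only container IDs, one per line.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}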
	I0127 15:40:53.194813 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:53.211192 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:53.211271 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:53.258010 1076050 cri.go:89] found id: ""
	I0127 15:40:53.258042 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.258060 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:53.258069 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:53.258138 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:53.297402 1076050 cri.go:89] found id: ""
	I0127 15:40:53.297430 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.297440 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:53.297448 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:53.297511 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:53.336412 1076050 cri.go:89] found id: ""
	I0127 15:40:53.336440 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.336450 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:53.336457 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:53.336526 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:53.383904 1076050 cri.go:89] found id: ""
	I0127 15:40:53.383939 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.383950 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:53.383959 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:53.384031 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:53.435476 1076050 cri.go:89] found id: ""
	I0127 15:40:53.435512 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.435525 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:53.435533 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:53.435604 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:49.271495 1074908 addons.go:514] duration metric: took 3.880366443s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 15:40:50.196894 1074908 pod_ready.go:103] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"False"
	I0127 15:40:51.684593 1074908 pod_ready.go:93] pod "etcd-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:51.684619 1074908 pod_ready.go:82] duration metric: took 6.007831808s for pod "etcd-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.684632 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.693065 1074908 pod_ready.go:93] pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:51.693095 1074908 pod_ready.go:82] duration metric: took 8.4536ms for pod "kube-apiserver-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:51.693110 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:52.703593 1074908 pod_ready.go:93] pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:52.703626 1074908 pod_ready.go:82] duration metric: took 1.010507584s for pod "kube-controller-manager-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:52.703641 1074908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:53.710652 1074908 pod_ready.go:93] pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace has status "Ready":"True"
	I0127 15:40:53.710683 1074908 pod_ready.go:82] duration metric: took 1.007031836s for pod "kube-scheduler-embed-certs-349782" in "kube-system" namespace to be "Ready" ...
	I0127 15:40:53.710695 1074908 pod_ready.go:39] duration metric: took 8.042232456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:40:53.710716 1074908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:40:53.710780 1074908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:53.771554 1074908 api_server.go:72] duration metric: took 8.380427434s to wait for apiserver process to appear ...
	I0127 15:40:53.771585 1074908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:40:53.771611 1074908 api_server.go:253] Checking apiserver healthz at https://192.168.61.43:8443/healthz ...
	I0127 15:40:53.779085 1074908 api_server.go:279] https://192.168.61.43:8443/healthz returned 200:
	ok
	I0127 15:40:53.780297 1074908 api_server.go:141] control plane version: v1.32.1
	I0127 15:40:53.780325 1074908 api_server.go:131] duration metric: took 8.731633ms to wait for apiserver health ...
	I0127 15:40:53.780335 1074908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:40:53.788343 1074908 system_pods.go:59] 9 kube-system pods found
	I0127 15:40:53.788373 1074908 system_pods.go:61] "coredns-668d6bf9bc-2ggkc" [ae4bf072-7cfb-4a26-8c71-abd3cbc52c28] Running
	I0127 15:40:53.788380 1074908 system_pods.go:61] "coredns-668d6bf9bc-h92kp" [5c29333b-4ea9-44fa-8be6-c350e6b709fe] Running
	I0127 15:40:53.788384 1074908 system_pods.go:61] "etcd-embed-certs-349782" [fcb552ae-bb9e-49de-a183-a26f8cac7e56] Running
	I0127 15:40:53.788388 1074908 system_pods.go:61] "kube-apiserver-embed-certs-349782" [5161cdd2-9cea-4b6d-9023-b20f56e14d9c] Running
	I0127 15:40:53.788392 1074908 system_pods.go:61] "kube-controller-manager-embed-certs-349782" [defbaf3b-e25a-4e20-a602-4be47bd2cc4b] Running
	I0127 15:40:53.788395 1074908 system_pods.go:61] "kube-proxy-vhpzl" [1bb477a3-24b0-4a0e-9bf1-ce5794d2cdbf] Running
	I0127 15:40:53.788398 1074908 system_pods.go:61] "kube-scheduler-embed-certs-349782" [ed785153-6f53-4289-a191-5545960c300f] Running
	I0127 15:40:53.788404 1074908 system_pods.go:61] "metrics-server-f79f97bbb-pnbcx" [af453586-d131-4ba7-aa9f-290eb044d58e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:40:53.788411 1074908 system_pods.go:61] "storage-provisioner" [e5c6e59a-52ab-4707-a438-5d01890928db] Running
	I0127 15:40:53.788422 1074908 system_pods.go:74] duration metric: took 8.079129ms to wait for pod list to return data ...
	I0127 15:40:53.788430 1074908 default_sa.go:34] waiting for default service account to be created ...
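A few lines above, the api_server.go step polls https://192.168.61.43:8443/healthz until it returns 200 "ok" before moving on to pod checks. Below is a minimal sketch of such a health poll, assuming the same address; it is illustrative only (the test cluster uses a self-signed CA, so verification is skipped here), not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Certificate verification skipped purely for illustration against a self-signed CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// Address taken from the log above; adjust for another cluster.
	if err := waitForHealthz("https://192.168.61.43:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}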
	I0127 15:40:52.793113 1075160 out.go:235]   - Generating certificates and keys ...
	I0127 15:40:52.793243 1075160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:40:52.793339 1075160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:40:52.793480 1075160 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:40:52.793582 1075160 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:40:52.793692 1075160 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:40:52.793783 1075160 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:40:52.793875 1075160 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:40:52.793966 1075160 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:40:52.794100 1075160 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:40:52.794204 1075160 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:40:52.794273 1075160 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:40:52.794363 1075160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:40:52.989346 1075160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:40:53.518286 1075160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 15:40:53.684220 1075160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:40:53.833269 1075160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:40:53.959433 1075160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:40:53.959944 1075160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:40:53.962645 1075160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:40:53.964848 1075160 out.go:235]   - Booting up control plane ...
	I0127 15:40:53.964986 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:40:53.965139 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:40:53.967441 1075160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:40:53.990143 1075160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:40:53.997601 1075160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:40:53.997684 1075160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:40:54.175814 1075160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 15:40:54.175985 1075160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 15:40:54.677251 1075160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.539769ms
	I0127 15:40:54.677364 1075160 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 15:40:53.477359 1076050 cri.go:89] found id: ""
	I0127 15:40:53.477389 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.477400 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:53.477408 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:53.477473 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:53.522739 1076050 cri.go:89] found id: ""
	I0127 15:40:53.522777 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.522789 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:53.522798 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:53.522870 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:53.591524 1076050 cri.go:89] found id: ""
	I0127 15:40:53.591556 1076050 logs.go:282] 0 containers: []
	W0127 15:40:53.591568 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:53.591581 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:53.591601 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:53.645459 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:53.645495 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:53.662522 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:53.662551 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:53.743915 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:53.743940 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:53.743957 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:53.844477 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:53.844511 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:56.390836 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:56.404803 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:56.404892 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:56.448556 1076050 cri.go:89] found id: ""
	I0127 15:40:56.448586 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.448597 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:56.448606 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:56.448674 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:56.501798 1076050 cri.go:89] found id: ""
	I0127 15:40:56.501833 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.501854 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:56.501863 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:56.501932 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:56.549831 1076050 cri.go:89] found id: ""
	I0127 15:40:56.549882 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.549895 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:56.549904 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:56.549976 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:56.604199 1076050 cri.go:89] found id: ""
	I0127 15:40:56.604236 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.604248 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:56.604258 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:56.604361 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:56.662492 1076050 cri.go:89] found id: ""
	I0127 15:40:56.662529 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.662540 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:56.662550 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:56.662621 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:56.712694 1076050 cri.go:89] found id: ""
	I0127 15:40:56.712731 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.712743 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:56.712752 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:56.712821 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:56.759321 1076050 cri.go:89] found id: ""
	I0127 15:40:56.759355 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.759366 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:56.759375 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:56.759441 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:40:56.806457 1076050 cri.go:89] found id: ""
	I0127 15:40:56.806487 1076050 logs.go:282] 0 containers: []
	W0127 15:40:56.806499 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:40:56.806511 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:40:56.806528 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:40:56.885361 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:40:56.885416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:40:56.904333 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:40:56.904390 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:40:57.003794 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:40:57.003820 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:40:57.003845 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:40:57.107181 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:40:57.107240 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:40:53.791640 1074908 default_sa.go:45] found service account: "default"
	I0127 15:40:53.791671 1074908 default_sa.go:55] duration metric: took 3.229036ms for default service account to be created ...
	I0127 15:40:53.791682 1074908 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:40:53.798897 1074908 system_pods.go:87] 9 kube-system pods found
	I0127 15:41:00.679789 1075160 kubeadm.go:310] [api-check] The API server is healthy after 6.002206079s
	I0127 15:41:00.695507 1075160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 15:41:00.712356 1075160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 15:41:00.738343 1075160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 15:41:00.738640 1075160 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-912913 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 15:41:00.753238 1075160 kubeadm.go:310] [bootstrap-token] Using token: 5gsmwo.93b5mx0ng9gboctz
	I0127 15:41:00.754589 1075160 out.go:235]   - Configuring RBAC rules ...
	I0127 15:41:00.754718 1075160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 15:41:00.773508 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 15:41:00.781170 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 15:41:00.784358 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 15:41:00.787629 1075160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 15:41:00.790904 1075160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 15:41:01.087298 1075160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 15:41:01.539193 1075160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 15:41:02.088850 1075160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 15:41:02.089949 1075160 kubeadm.go:310] 
	I0127 15:41:02.090088 1075160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 15:41:02.090112 1075160 kubeadm.go:310] 
	I0127 15:41:02.090212 1075160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 15:41:02.090222 1075160 kubeadm.go:310] 
	I0127 15:41:02.090256 1075160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 15:41:02.090363 1075160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 15:41:02.090438 1075160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 15:41:02.090447 1075160 kubeadm.go:310] 
	I0127 15:41:02.090529 1075160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 15:41:02.090542 1075160 kubeadm.go:310] 
	I0127 15:41:02.090605 1075160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 15:41:02.090612 1075160 kubeadm.go:310] 
	I0127 15:41:02.090674 1075160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 15:41:02.090813 1075160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 15:41:02.090903 1075160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 15:41:02.090913 1075160 kubeadm.go:310] 
	I0127 15:41:02.091020 1075160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 15:41:02.091116 1075160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 15:41:02.091126 1075160 kubeadm.go:310] 
	I0127 15:41:02.091223 1075160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 5gsmwo.93b5mx0ng9gboctz \
	I0127 15:41:02.091357 1075160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 \
	I0127 15:41:02.091383 1075160 kubeadm.go:310] 	--control-plane 
	I0127 15:41:02.091393 1075160 kubeadm.go:310] 
	I0127 15:41:02.091482 1075160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 15:41:02.091490 1075160 kubeadm.go:310] 
	I0127 15:41:02.091576 1075160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 5gsmwo.93b5mx0ng9gboctz \
	I0127 15:41:02.091686 1075160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1644c54c401e0b212a4fb54cb4a1cfb2ad068dc5ffe5f28d20b99797f9a46a27 
	I0127 15:41:02.093055 1075160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:41:02.093120 1075160 cni.go:84] Creating CNI manager for ""
	I0127 15:41:02.093134 1075160 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 15:41:02.095065 1075160 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 15:41:02.096511 1075160 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 15:41:02.110508 1075160 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 15:41:02.132628 1075160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 15:41:02.132723 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:02.132745 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-912913 minikube.k8s.io/updated_at=2025_01_27T15_41_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743 minikube.k8s.io/name=default-k8s-diff-port-912913 minikube.k8s.io/primary=true
	I0127 15:41:02.380721 1075160 ops.go:34] apiserver oom_adj: -16
	I0127 15:41:02.380856 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:40:59.656976 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:40:59.675626 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:40:59.675762 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:40:59.719313 1076050 cri.go:89] found id: ""
	I0127 15:40:59.719343 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.719351 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:40:59.719357 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:40:59.719441 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:40:59.758380 1076050 cri.go:89] found id: ""
	I0127 15:40:59.758419 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.758433 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:40:59.758441 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:40:59.758511 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:40:59.802754 1076050 cri.go:89] found id: ""
	I0127 15:40:59.802787 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.802798 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:40:59.802806 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:40:59.802874 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:40:59.847665 1076050 cri.go:89] found id: ""
	I0127 15:40:59.847695 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.847707 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:40:59.847716 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:40:59.847781 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:40:59.888840 1076050 cri.go:89] found id: ""
	I0127 15:40:59.888867 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.888875 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:40:59.888882 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:40:59.888946 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:40:59.935416 1076050 cri.go:89] found id: ""
	I0127 15:40:59.935448 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.935460 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:40:59.935468 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:40:59.935544 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:40:59.982418 1076050 cri.go:89] found id: ""
	I0127 15:40:59.982448 1076050 logs.go:282] 0 containers: []
	W0127 15:40:59.982456 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:40:59.982464 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:40:59.982539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:00.024752 1076050 cri.go:89] found id: ""
	I0127 15:41:00.024794 1076050 logs.go:282] 0 containers: []
	W0127 15:41:00.024806 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:00.024820 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:00.024839 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:00.044330 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:00.044369 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:00.130115 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:00.130216 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:00.130241 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:00.236534 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:00.236585 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:00.312265 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:00.312307 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:02.873155 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:02.889623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:02.889689 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:02.931491 1076050 cri.go:89] found id: ""
	I0127 15:41:02.931528 1076050 logs.go:282] 0 containers: []
	W0127 15:41:02.931537 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:02.931546 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:02.931615 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:02.968872 1076050 cri.go:89] found id: ""
	I0127 15:41:02.968912 1076050 logs.go:282] 0 containers: []
	W0127 15:41:02.968924 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:02.968932 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:02.969030 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:03.004397 1076050 cri.go:89] found id: ""
	I0127 15:41:03.004428 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.004437 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:03.004443 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:03.004498 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:03.042909 1076050 cri.go:89] found id: ""
	I0127 15:41:03.042937 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.042948 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:03.042955 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:03.043020 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:03.081525 1076050 cri.go:89] found id: ""
	I0127 15:41:03.081556 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.081567 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:03.081576 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:03.081645 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:03.122741 1076050 cri.go:89] found id: ""
	I0127 15:41:03.122773 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.122784 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:03.122793 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:03.122855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:03.159043 1076050 cri.go:89] found id: ""
	I0127 15:41:03.159069 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.159077 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:03.159090 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:03.159140 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:03.200367 1076050 cri.go:89] found id: ""
	I0127 15:41:03.200402 1076050 logs.go:282] 0 containers: []
	W0127 15:41:03.200414 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:03.200429 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:03.200447 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:03.291239 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:03.291291 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:03.336057 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:03.336098 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:03.395428 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:03.395480 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:03.411878 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:03.411911 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 15:41:02.881961 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:03.381153 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:03.881177 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:04.381381 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:04.881601 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.381394 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.881197 1075160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 15:41:05.963844 1075160 kubeadm.go:1113] duration metric: took 3.831201657s to wait for elevateKubeSystemPrivileges
	I0127 15:41:05.963884 1075160 kubeadm.go:394] duration metric: took 5m3.006407652s to StartCluster
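The repeated "kubectl get sa default" calls above are the wait for the default service account to exist before privileges are elevated and StartCluster completes. The sketch below is a hypothetical local version of that polling loop, with the kubectl and kubeconfig paths copied from the log; it is not minikube's kubeadm.go.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Paths as recorded in the log above.
	kubectl := "/var/lib/minikube/binaries/v1.32.1/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// kubectl exits 0 once the "default" ServiceAccount exists in the default namespace.
		if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}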
	I0127 15:41:05.963905 1075160 settings.go:142] acquiring lock: {Name:mk76722a8563186e0a733747d866232f054026c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:41:05.964014 1075160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:41:05.966708 1075160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/kubeconfig: {Name:mkbbac5bba6d4c0b16bfeec72a266ac615d65111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 15:41:05.967090 1075160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.160 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 15:41:05.967165 1075160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 15:41:05.967282 1075160 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967302 1075160 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.967308 1075160 addons.go:247] addon storage-provisioner should already be in state true
	I0127 15:41:05.967326 1075160 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967343 1075160 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967355 1075160 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-912913"
	I0127 15:41:05.967358 1075160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-912913"
	I0127 15:41:05.967357 1075160 config.go:182] Loaded profile config "default-k8s-diff-port-912913": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:41:05.967356 1075160 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-912913"
	I0127 15:41:05.967381 1075160 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.967390 1075160 addons.go:247] addon dashboard should already be in state true
	W0127 15:41:05.967362 1075160 addons.go:247] addon metrics-server should already be in state true
	I0127 15:41:05.967334 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967433 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967433 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.967803 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967829 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967842 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967854 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.967866 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967894 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967857 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.967954 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.968953 1075160 out.go:177] * Verifying Kubernetes components...
	I0127 15:41:05.970726 1075160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 15:41:05.986076 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0127 15:41:05.986613 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.987340 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.987367 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.987696 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40949
	I0127 15:41:05.987879 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35511
	I0127 15:41:05.987883 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0127 15:41:05.987924 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.988072 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988235 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988485 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:05.988597 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.988641 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.988725 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.988745 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.988760 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.988775 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.989142 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:05.989164 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:05.989172 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989192 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989534 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:05.989721 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:05.989770 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.989789 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.989815 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.989827 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:05.993646 1075160 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-912913"
	W0127 15:41:05.993672 1075160 addons.go:247] addon default-storageclass should already be in state true
	I0127 15:41:05.993703 1075160 host.go:66] Checking if "default-k8s-diff-port-912913" exists ...
	I0127 15:41:05.994089 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:05.994137 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:06.007391 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I0127 15:41:06.007784 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39871
	I0127 15:41:06.008229 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.008327 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.008859 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.008880 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.008951 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0127 15:41:06.009182 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.009201 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.009660 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.009740 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.009876 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.010328 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.010393 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.010588 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.010748 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.010833 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.025187 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.025199 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.025187 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.025186 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0127 15:41:06.037186 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.037801 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.038419 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.038439 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.038833 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.039733 1075160 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 15:41:06.039865 1075160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 15:41:06.039911 1075160 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:41:06.039947 1075160 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 15:41:06.039975 1075160 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:41:06.041831 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 15:41:06.041853 1075160 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 15:41:06.041887 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.042817 1075160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:41:06.042833 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 15:41:06.042854 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.045474 1075160 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 15:41:06.047233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.047253 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 15:41:06.047270 1075160 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 15:41:06.047294 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.047965 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.048037 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.048421 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.048675 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.049034 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.049616 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.051299 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.051321 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.051717 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.051739 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.052033 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.052054 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.052088 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.052323 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.052372 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.052526 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.052702 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.057244 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.057489 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.057880 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.058959 1075160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39803
	I0127 15:41:06.059421 1075160 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:41:06.059854 1075160 main.go:141] libmachine: Using API Version  1
	I0127 15:41:06.059866 1075160 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:41:06.060259 1075160 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:41:06.060421 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetState
	I0127 15:41:06.062233 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .DriverName
	I0127 15:41:06.062753 1075160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 15:41:06.062767 1075160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 15:41:06.062781 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHHostname
	I0127 15:41:06.067605 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.068014 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:e7:ab", ip: ""} in network mk-default-k8s-diff-port-912913: {Iface:virbr2 ExpiryTime:2025-01-27 16:35:48 +0000 UTC Type:0 Mac:52:54:00:04:e7:ab Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:default-k8s-diff-port-912913 Clientid:01:52:54:00:04:e7:ab}
	I0127 15:41:06.068027 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | domain default-k8s-diff-port-912913 has defined IP address 192.168.39.160 and MAC address 52:54:00:04:e7:ab in network mk-default-k8s-diff-port-912913
	I0127 15:41:06.068243 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHPort
	I0127 15:41:06.068368 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHKeyPath
	I0127 15:41:06.068559 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .GetSSHUsername
	I0127 15:41:06.068695 1075160 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/default-k8s-diff-port-912913/id_rsa Username:docker}
	I0127 15:41:06.211887 1075160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 15:41:06.257549 1075160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-912913" to be "Ready" ...
	I0127 15:41:06.305423 1075160 node_ready.go:49] node "default-k8s-diff-port-912913" has status "Ready":"True"
	I0127 15:41:06.305459 1075160 node_ready.go:38] duration metric: took 47.864404ms for node "default-k8s-diff-port-912913" to be "Ready" ...
	I0127 15:41:06.305474 1075160 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:41:06.311746 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 15:41:06.311780 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 15:41:06.329198 1075160 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:06.374086 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 15:41:06.374119 1075160 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 15:41:06.377742 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 15:41:06.377771 1075160 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 15:41:06.400332 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 15:41:06.403004 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 15:41:06.430195 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 15:41:06.430217 1075160 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 15:41:06.487574 1075160 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:41:06.487605 1075160 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 15:41:06.529999 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 15:41:06.530054 1075160 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 15:41:06.609758 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 15:41:06.619520 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 15:41:06.619567 1075160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 15:41:06.795826 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 15:41:06.795870 1075160 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 15:41:06.889910 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 15:41:06.889940 1075160 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 15:41:06.979355 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 15:41:06.979391 1075160 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 15:41:07.053404 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 15:41:07.053438 1075160 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 15:41:07.101199 1075160 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:41:07.101235 1075160 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 15:41:07.165859 1075160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 15:41:07.419725 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.016680012s)
	I0127 15:41:07.419820 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.419839 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.419841 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.019463574s)
	I0127 15:41:07.419916 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.419939 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420292 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420306 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420322 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420352 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.420365 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420366 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420492 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420521 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420530 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.420538 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.420775 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420779 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420786 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.420814 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.420842 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.420849 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.438640 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.438681 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.439056 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.439081 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.439091 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	W0127 15:41:03.498183 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:06.000178 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:06.024915 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:06.024973 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:06.098332 1076050 cri.go:89] found id: ""
	I0127 15:41:06.098361 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.098369 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:06.098375 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:06.098430 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:06.156082 1076050 cri.go:89] found id: ""
	I0127 15:41:06.156117 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.156129 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:06.156137 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:06.156203 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:06.217204 1076050 cri.go:89] found id: ""
	I0127 15:41:06.217235 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.217246 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:06.217255 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:06.217331 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:06.259003 1076050 cri.go:89] found id: ""
	I0127 15:41:06.259029 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.259041 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:06.259048 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:06.259123 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:06.298292 1076050 cri.go:89] found id: ""
	I0127 15:41:06.298330 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.298341 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:06.298349 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:06.298416 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:06.339173 1076050 cri.go:89] found id: ""
	I0127 15:41:06.339211 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.339224 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:06.339234 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:06.339309 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:06.381271 1076050 cri.go:89] found id: ""
	I0127 15:41:06.381300 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.381311 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:06.381320 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:06.381385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:06.429073 1076050 cri.go:89] found id: ""
	I0127 15:41:06.429134 1076050 logs.go:282] 0 containers: []
	W0127 15:41:06.429149 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:06.429164 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:06.429187 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:06.491509 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:06.491545 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:06.507964 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:06.508011 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:06.589122 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:06.589158 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:06.589173 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:06.668992 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:06.669051 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:07.791715 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.18189835s)
	I0127 15:41:07.791796 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.791813 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.792148 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.792170 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.792181 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:07.792190 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:07.792522 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) DBG | Closing plugin on server side
	I0127 15:41:07.792570 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:07.792580 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:07.792591 1075160 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-912913"
	I0127 15:41:08.375027 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:08.535318 1075160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.369395363s)
	I0127 15:41:08.535382 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:08.535398 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:08.535779 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:08.535833 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:08.535847 1075160 main.go:141] libmachine: Making call to close driver server
	I0127 15:41:08.535857 1075160 main.go:141] libmachine: (default-k8s-diff-port-912913) Calling .Close
	I0127 15:41:08.536129 1075160 main.go:141] libmachine: Successfully made call to close driver server
	I0127 15:41:08.536152 1075160 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 15:41:08.537800 1075160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-912913 addons enable metrics-server
	
	I0127 15:41:08.539323 1075160 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 15:41:08.540713 1075160 addons.go:514] duration metric: took 2.57355558s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 15:41:10.869256 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:09.224594 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:09.239525 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:09.239616 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:09.285116 1076050 cri.go:89] found id: ""
	I0127 15:41:09.285160 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.285172 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:09.285182 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:09.285252 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:09.342278 1076050 cri.go:89] found id: ""
	I0127 15:41:09.342307 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.342323 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:09.342332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:09.342397 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:09.385479 1076050 cri.go:89] found id: ""
	I0127 15:41:09.385506 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.385515 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:09.385521 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:09.385580 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:09.426386 1076050 cri.go:89] found id: ""
	I0127 15:41:09.426426 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.426439 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:09.426448 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:09.426516 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:09.468739 1076050 cri.go:89] found id: ""
	I0127 15:41:09.468776 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.468789 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:09.468798 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:09.468866 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:09.510885 1076050 cri.go:89] found id: ""
	I0127 15:41:09.510918 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.510931 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:09.510939 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:09.511007 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:09.548406 1076050 cri.go:89] found id: ""
	I0127 15:41:09.548442 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.548455 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:09.548464 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:09.548547 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:09.589727 1076050 cri.go:89] found id: ""
	I0127 15:41:09.589761 1076050 logs.go:282] 0 containers: []
	W0127 15:41:09.589773 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:09.589786 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:09.589802 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:09.641717 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:09.641759 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:09.712152 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:09.712220 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:09.730069 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:09.730119 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:09.808412 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:09.808447 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:09.808462 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:12.421654 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:12.440156 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:12.440298 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:12.489759 1076050 cri.go:89] found id: ""
	I0127 15:41:12.489788 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.489800 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:12.489809 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:12.489887 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:12.540068 1076050 cri.go:89] found id: ""
	I0127 15:41:12.540099 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.540108 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:12.540114 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:12.540178 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:12.587471 1076050 cri.go:89] found id: ""
	I0127 15:41:12.587497 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.587505 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:12.587511 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:12.587578 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:12.638634 1076050 cri.go:89] found id: ""
	I0127 15:41:12.638668 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.638680 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:12.638689 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:12.638762 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:12.683784 1076050 cri.go:89] found id: ""
	I0127 15:41:12.683815 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.683826 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:12.683837 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:12.683900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:12.720438 1076050 cri.go:89] found id: ""
	I0127 15:41:12.720479 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.720488 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:12.720495 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:12.720548 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:12.759175 1076050 cri.go:89] found id: ""
	I0127 15:41:12.759207 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.759219 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:12.759226 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:12.759290 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:12.792624 1076050 cri.go:89] found id: ""
	I0127 15:41:12.792656 1076050 logs.go:282] 0 containers: []
	W0127 15:41:12.792668 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:12.792681 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:12.792697 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:12.878341 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:12.878386 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:12.926986 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:12.927028 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:12.982133 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:12.982172 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:12.999460 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:12.999503 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:13.087892 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:13.336050 1075160 pod_ready.go:103] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"False"
	I0127 15:41:15.338501 1075160 pod_ready.go:93] pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.338533 1075160 pod_ready.go:82] duration metric: took 9.009294324s for pod "etcd-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.338546 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.343866 1075160 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.343889 1075160 pod_ready.go:82] duration metric: took 5.336104ms for pod "kube-apiserver-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.343898 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.349389 1075160 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.349413 1075160 pod_ready.go:82] duration metric: took 5.508752ms for pod "kube-controller-manager-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.349422 1075160 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.355144 1075160 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace has status "Ready":"True"
	I0127 15:41:15.355166 1075160 pod_ready.go:82] duration metric: took 5.737289ms for pod "kube-scheduler-default-k8s-diff-port-912913" in "kube-system" namespace to be "Ready" ...
	I0127 15:41:15.355173 1075160 pod_ready.go:39] duration metric: took 9.049686447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 15:41:15.355191 1075160 api_server.go:52] waiting for apiserver process to appear ...
	I0127 15:41:15.355243 1075160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:15.370942 1075160 api_server.go:72] duration metric: took 9.403809848s to wait for apiserver process to appear ...
	I0127 15:41:15.370967 1075160 api_server.go:88] waiting for apiserver healthz status ...
	I0127 15:41:15.370986 1075160 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8444/healthz ...
	I0127 15:41:15.378733 1075160 api_server.go:279] https://192.168.39.160:8444/healthz returned 200:
	ok
	I0127 15:41:15.380614 1075160 api_server.go:141] control plane version: v1.32.1
	I0127 15:41:15.380640 1075160 api_server.go:131] duration metric: took 9.666454ms to wait for apiserver health ...
	I0127 15:41:15.380649 1075160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 15:41:15.390107 1075160 system_pods.go:59] 9 kube-system pods found
	I0127 15:41:15.390141 1075160 system_pods.go:61] "coredns-668d6bf9bc-8rzrt" [92e346ae-cc28-4f80-9424-c4d97ac8106c] Running
	I0127 15:41:15.390147 1075160 system_pods.go:61] "coredns-668d6bf9bc-zw9rm" [c29a853d-5146-4641-a434-d85147dc3b16] Running
	I0127 15:41:15.390151 1075160 system_pods.go:61] "etcd-default-k8s-diff-port-912913" [4eb15463-b135-4347-9c0b-ff5cd9fa0991] Running
	I0127 15:41:15.390155 1075160 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-912913" [f1d151d9-bd66-41f1-b2e8-bb495f8a3522] Running
	I0127 15:41:15.390159 1075160 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-912913" [da81a47f-a89e-4daa-828c-e1dc1458067c] Running
	I0127 15:41:15.390161 1075160 system_pods.go:61] "kube-proxy-k85rn" [8da8dc48-3019-4fa6-b5c4-58b0b41aefc0] Running
	I0127 15:41:15.390165 1075160 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-912913" [9042c262-515d-40d9-9d99-fda8f49b141a] Running
	I0127 15:41:15.390170 1075160 system_pods.go:61] "metrics-server-f79f97bbb-rtx6b" [aed61473-0cc8-4459-9153-5c42e5a10b2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 15:41:15.390174 1075160 system_pods.go:61] "storage-provisioner" [5fa7b229-cd7d-4aa4-9cee-26a1c5714b3c] Running
	I0127 15:41:15.390184 1075160 system_pods.go:74] duration metric: took 9.526361ms to wait for pod list to return data ...
	I0127 15:41:15.390193 1075160 default_sa.go:34] waiting for default service account to be created ...
	I0127 15:41:15.394345 1075160 default_sa.go:45] found service account: "default"
	I0127 15:41:15.394371 1075160 default_sa.go:55] duration metric: took 4.169137ms for default service account to be created ...
	I0127 15:41:15.394380 1075160 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 15:41:15.537654 1075160 system_pods.go:87] 9 kube-system pods found
	I0127 15:41:15.589166 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:15.607749 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:15.607824 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:15.655722 1076050 cri.go:89] found id: ""
	I0127 15:41:15.655752 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.655764 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:15.655773 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:15.655847 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:15.703202 1076050 cri.go:89] found id: ""
	I0127 15:41:15.703235 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.703248 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:15.703256 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:15.703360 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:15.747335 1076050 cri.go:89] found id: ""
	I0127 15:41:15.747371 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.747383 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:15.747400 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:15.747470 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:15.786207 1076050 cri.go:89] found id: ""
	I0127 15:41:15.786245 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.786259 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:15.786269 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:15.786351 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:15.826251 1076050 cri.go:89] found id: ""
	I0127 15:41:15.826286 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.826298 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:15.826306 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:15.826435 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:15.873134 1076050 cri.go:89] found id: ""
	I0127 15:41:15.873167 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.873187 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:15.873195 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:15.873267 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:15.923221 1076050 cri.go:89] found id: ""
	I0127 15:41:15.923273 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.923286 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:15.923294 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:15.923364 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:15.967245 1076050 cri.go:89] found id: ""
	I0127 15:41:15.967282 1076050 logs.go:282] 0 containers: []
	W0127 15:41:15.967295 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:15.967309 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:15.967325 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:16.057675 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:16.057706 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:16.057722 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:16.141133 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:16.141181 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:16.186832 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:16.186869 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:16.255430 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:16.255473 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:18.774206 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:18.792191 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:18.792258 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:18.835636 1076050 cri.go:89] found id: ""
	I0127 15:41:18.835674 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.835685 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:18.835693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:18.835763 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:18.875370 1076050 cri.go:89] found id: ""
	I0127 15:41:18.875423 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.875435 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:18.875444 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:18.875517 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:18.915439 1076050 cri.go:89] found id: ""
	I0127 15:41:18.915469 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.915480 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:18.915489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:18.915554 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:18.962331 1076050 cri.go:89] found id: ""
	I0127 15:41:18.962359 1076050 logs.go:282] 0 containers: []
	W0127 15:41:18.962366 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:18.962372 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:18.962425 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:19.017809 1076050 cri.go:89] found id: ""
	I0127 15:41:19.017839 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.017849 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:19.017857 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:19.017924 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:19.066418 1076050 cri.go:89] found id: ""
	I0127 15:41:19.066454 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.066463 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:19.066469 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:19.066540 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:19.107181 1076050 cri.go:89] found id: ""
	I0127 15:41:19.107212 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.107221 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:19.107227 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:19.107286 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:19.148999 1076050 cri.go:89] found id: ""
	I0127 15:41:19.149043 1076050 logs.go:282] 0 containers: []
	W0127 15:41:19.149055 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:19.149070 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:19.149093 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:19.235472 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:19.235514 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:19.290762 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:19.290794 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:19.349155 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:19.349201 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:19.365924 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:19.365957 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:19.455480 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:21.957147 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:21.971580 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:21.971732 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:22.011493 1076050 cri.go:89] found id: ""
	I0127 15:41:22.011523 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.011531 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:22.011537 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:22.011600 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:22.047592 1076050 cri.go:89] found id: ""
	I0127 15:41:22.047615 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.047623 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:22.047635 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:22.047704 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:22.084231 1076050 cri.go:89] found id: ""
	I0127 15:41:22.084258 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.084266 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:22.084272 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:22.084331 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:22.126843 1076050 cri.go:89] found id: ""
	I0127 15:41:22.126870 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.126881 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:22.126890 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:22.126952 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:22.167538 1076050 cri.go:89] found id: ""
	I0127 15:41:22.167563 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.167572 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:22.167579 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:22.167633 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:22.206138 1076050 cri.go:89] found id: ""
	I0127 15:41:22.206169 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.206180 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:22.206193 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:22.206259 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:22.245152 1076050 cri.go:89] found id: ""
	I0127 15:41:22.245186 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.245199 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:22.245207 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:22.245273 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:22.280780 1076050 cri.go:89] found id: ""
	I0127 15:41:22.280820 1076050 logs.go:282] 0 containers: []
	W0127 15:41:22.280831 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:22.280844 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:22.280859 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:22.333940 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:22.333975 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:22.348880 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:22.348910 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:22.421581 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:22.421610 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:22.421625 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:22.502157 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:22.502199 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:25.045123 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:25.058997 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:25.059058 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:25.094852 1076050 cri.go:89] found id: ""
	I0127 15:41:25.094881 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.094888 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:25.094896 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:25.094955 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:25.136390 1076050 cri.go:89] found id: ""
	I0127 15:41:25.136414 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.136424 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:25.136432 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:25.136491 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:25.173187 1076050 cri.go:89] found id: ""
	I0127 15:41:25.173213 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.173221 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:25.173226 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:25.173284 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:25.210946 1076050 cri.go:89] found id: ""
	I0127 15:41:25.210977 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.210990 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:25.210999 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:25.211082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:25.251607 1076050 cri.go:89] found id: ""
	I0127 15:41:25.251633 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.251643 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:25.251649 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:25.251702 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:25.286803 1076050 cri.go:89] found id: ""
	I0127 15:41:25.286831 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.286842 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:25.286849 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:25.286914 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:25.322818 1076050 cri.go:89] found id: ""
	I0127 15:41:25.322846 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.322857 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:25.322866 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:25.322936 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:25.361082 1076050 cri.go:89] found id: ""
	I0127 15:41:25.361110 1076050 logs.go:282] 0 containers: []
	W0127 15:41:25.361120 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:25.361130 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:25.361142 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:25.412378 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:25.412416 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:25.427170 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:25.427206 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:25.498342 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:25.498377 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:25.498393 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:25.589099 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:25.589152 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:28.130224 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:28.145326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:28.145389 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:28.186258 1076050 cri.go:89] found id: ""
	I0127 15:41:28.186293 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.186316 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:28.186326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:28.186408 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:28.224332 1076050 cri.go:89] found id: ""
	I0127 15:41:28.224370 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.224382 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:28.224393 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:28.224462 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:28.262236 1076050 cri.go:89] found id: ""
	I0127 15:41:28.262267 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.262274 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:28.262282 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:28.262334 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:28.299248 1076050 cri.go:89] found id: ""
	I0127 15:41:28.299281 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.299290 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:28.299300 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:28.299358 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:28.340255 1076050 cri.go:89] found id: ""
	I0127 15:41:28.340289 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.340301 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:28.340326 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:28.340396 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:28.384857 1076050 cri.go:89] found id: ""
	I0127 15:41:28.384891 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.384903 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:28.384912 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:28.384983 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:28.428121 1076050 cri.go:89] found id: ""
	I0127 15:41:28.428158 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.428169 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:28.428179 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:28.428248 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:28.473305 1076050 cri.go:89] found id: ""
	I0127 15:41:28.473332 1076050 logs.go:282] 0 containers: []
	W0127 15:41:28.473340 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:28.473350 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:28.473368 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:28.571238 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:28.571271 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:28.571316 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:28.651696 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:28.651731 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:28.692842 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:28.692870 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:28.748091 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:28.748133 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:31.262275 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:31.278085 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:31.278174 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:31.313339 1076050 cri.go:89] found id: ""
	I0127 15:41:31.313366 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.313375 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:31.313381 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:31.313450 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:31.351690 1076050 cri.go:89] found id: ""
	I0127 15:41:31.351716 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.351726 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:31.351732 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:31.351797 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:31.387516 1076050 cri.go:89] found id: ""
	I0127 15:41:31.387547 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.387556 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:31.387562 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:31.387617 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:31.422030 1076050 cri.go:89] found id: ""
	I0127 15:41:31.422062 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.422070 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:31.422076 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:31.422134 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:31.458563 1076050 cri.go:89] found id: ""
	I0127 15:41:31.458592 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.458604 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:31.458612 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:31.458679 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:31.496029 1076050 cri.go:89] found id: ""
	I0127 15:41:31.496064 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.496075 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:31.496090 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:31.496156 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:31.543782 1076050 cri.go:89] found id: ""
	I0127 15:41:31.543808 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.543816 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:31.543822 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:31.543874 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:31.581950 1076050 cri.go:89] found id: ""
	I0127 15:41:31.581987 1076050 logs.go:282] 0 containers: []
	W0127 15:41:31.582001 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:31.582014 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:31.582032 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:31.653329 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:31.653358 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:31.653374 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:31.736286 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:31.736323 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:31.782977 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:31.783009 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:31.842741 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:31.842773 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:34.357158 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:34.370137 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:34.370204 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:34.414297 1076050 cri.go:89] found id: ""
	I0127 15:41:34.414334 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.414347 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:34.414356 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:34.414437 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:34.450717 1076050 cri.go:89] found id: ""
	I0127 15:41:34.450749 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.450759 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:34.450767 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:34.450832 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:34.490881 1076050 cri.go:89] found id: ""
	I0127 15:41:34.490915 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.490928 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:34.490937 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:34.491012 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:34.526240 1076050 cri.go:89] found id: ""
	I0127 15:41:34.526277 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.526289 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:34.526297 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:34.526365 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:34.562664 1076050 cri.go:89] found id: ""
	I0127 15:41:34.562700 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.562712 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:34.562721 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:34.562788 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:34.600382 1076050 cri.go:89] found id: ""
	I0127 15:41:34.600411 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.600422 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:34.600430 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:34.600496 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:34.636399 1076050 cri.go:89] found id: ""
	I0127 15:41:34.636431 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.636443 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:34.636451 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:34.636518 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:34.676900 1076050 cri.go:89] found id: ""
	I0127 15:41:34.676935 1076050 logs.go:282] 0 containers: []
	W0127 15:41:34.676948 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:34.676961 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:34.676975 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:34.730519 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:34.730555 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:34.746159 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:34.746188 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:34.823410 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:34.823447 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:34.823468 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:34.907572 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:34.907611 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:37.485412 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:37.499659 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:37.499761 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:37.536578 1076050 cri.go:89] found id: ""
	I0127 15:41:37.536608 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.536618 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:37.536627 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:37.536703 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:37.573737 1076050 cri.go:89] found id: ""
	I0127 15:41:37.573773 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.573783 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:37.573790 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:37.573861 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:37.611200 1076050 cri.go:89] found id: ""
	I0127 15:41:37.611232 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.611241 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:37.611248 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:37.611302 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:37.646784 1076050 cri.go:89] found id: ""
	I0127 15:41:37.646812 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.646823 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:37.646832 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:37.646900 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:37.684664 1076050 cri.go:89] found id: ""
	I0127 15:41:37.684694 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.684706 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:37.684714 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:37.684777 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:37.721812 1076050 cri.go:89] found id: ""
	I0127 15:41:37.721850 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.721863 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:37.721874 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:37.721944 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:37.759256 1076050 cri.go:89] found id: ""
	I0127 15:41:37.759279 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.759287 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:37.759293 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:37.759345 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:37.798971 1076050 cri.go:89] found id: ""
	I0127 15:41:37.799004 1076050 logs.go:282] 0 containers: []
	W0127 15:41:37.799017 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:37.799030 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:37.799041 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:37.855679 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:37.855719 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:37.869799 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:37.869833 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:37.943918 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:37.943944 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:37.943956 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:38.035563 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:38.035611 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:40.581178 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:40.597341 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:40.597409 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:40.634799 1076050 cri.go:89] found id: ""
	I0127 15:41:40.634827 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.634836 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:40.634843 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:40.634910 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:40.684392 1076050 cri.go:89] found id: ""
	I0127 15:41:40.684421 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.684429 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:40.684437 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:40.684504 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:40.729085 1076050 cri.go:89] found id: ""
	I0127 15:41:40.729120 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.729131 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:40.729139 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:40.729212 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:40.778437 1076050 cri.go:89] found id: ""
	I0127 15:41:40.778469 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.778482 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:40.778489 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:40.778556 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:40.820889 1076050 cri.go:89] found id: ""
	I0127 15:41:40.820914 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.820922 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:40.820928 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:40.820992 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:40.858256 1076050 cri.go:89] found id: ""
	I0127 15:41:40.858284 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.858296 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:40.858304 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:40.858374 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:40.897931 1076050 cri.go:89] found id: ""
	I0127 15:41:40.897957 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.897966 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:40.897972 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:40.898026 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:40.937068 1076050 cri.go:89] found id: ""
	I0127 15:41:40.937100 1076050 logs.go:282] 0 containers: []
	W0127 15:41:40.937111 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:40.937124 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:40.937138 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:41.012844 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:41.012867 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:41.012880 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:41.093680 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:41.093722 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:41.136964 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:41.136996 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:41.190396 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:41.190435 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:43.708328 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:43.722838 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:43.722928 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:43.762360 1076050 cri.go:89] found id: ""
	I0127 15:41:43.762395 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.762407 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:43.762416 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:43.762483 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:43.802226 1076050 cri.go:89] found id: ""
	I0127 15:41:43.802266 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.802279 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:43.802287 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:43.802363 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:43.848037 1076050 cri.go:89] found id: ""
	I0127 15:41:43.848067 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.848081 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:43.848100 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:43.848167 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:43.891393 1076050 cri.go:89] found id: ""
	I0127 15:41:43.891491 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.891506 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:43.891516 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:43.891585 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:43.936352 1076050 cri.go:89] found id: ""
	I0127 15:41:43.936447 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.936467 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:43.936481 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:43.936632 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:43.980165 1076050 cri.go:89] found id: ""
	I0127 15:41:43.980192 1076050 logs.go:282] 0 containers: []
	W0127 15:41:43.980200 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:43.980206 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:43.980264 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:44.019889 1076050 cri.go:89] found id: ""
	I0127 15:41:44.019925 1076050 logs.go:282] 0 containers: []
	W0127 15:41:44.019938 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:44.019946 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:44.020005 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:44.057363 1076050 cri.go:89] found id: ""
	I0127 15:41:44.057400 1076050 logs.go:282] 0 containers: []
	W0127 15:41:44.057412 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:44.057426 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:44.057442 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:44.072218 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:44.072249 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:44.148918 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:44.148944 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:44.148960 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:44.231300 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:44.231347 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:44.273468 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:44.273507 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:46.833142 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:46.848106 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:46.848174 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:46.886223 1076050 cri.go:89] found id: ""
	I0127 15:41:46.886250 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.886258 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:46.886264 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:46.886315 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:46.923854 1076050 cri.go:89] found id: ""
	I0127 15:41:46.923883 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.923891 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:46.923903 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:46.923956 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:46.962084 1076050 cri.go:89] found id: ""
	I0127 15:41:46.962112 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.962120 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:46.962128 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:46.962189 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:46.998299 1076050 cri.go:89] found id: ""
	I0127 15:41:46.998329 1076050 logs.go:282] 0 containers: []
	W0127 15:41:46.998338 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:46.998344 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:46.998401 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:47.036481 1076050 cri.go:89] found id: ""
	I0127 15:41:47.036519 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.036531 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:47.036540 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:47.036606 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:47.072486 1076050 cri.go:89] found id: ""
	I0127 15:41:47.072522 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.072534 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:47.072543 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:47.072610 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:47.116871 1076050 cri.go:89] found id: ""
	I0127 15:41:47.116912 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.116937 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:47.116947 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:47.117049 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:47.157060 1076050 cri.go:89] found id: ""
	I0127 15:41:47.157092 1076050 logs.go:282] 0 containers: []
	W0127 15:41:47.157104 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:47.157118 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:47.157135 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:47.210998 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:47.211040 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:47.224898 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:47.224926 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:47.306490 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:47.306521 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:47.306540 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:47.394529 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:47.394582 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:49.942182 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:49.958258 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:49.958321 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:49.997962 1076050 cri.go:89] found id: ""
	I0127 15:41:49.997999 1076050 logs.go:282] 0 containers: []
	W0127 15:41:49.998019 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:49.998029 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:49.998091 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:50.042973 1076050 cri.go:89] found id: ""
	I0127 15:41:50.043007 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.043015 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:50.043021 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:50.043078 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:50.080466 1076050 cri.go:89] found id: ""
	I0127 15:41:50.080496 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.080506 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:50.080514 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:50.080581 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:50.122155 1076050 cri.go:89] found id: ""
	I0127 15:41:50.122187 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.122199 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:50.122208 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:50.122270 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:50.160215 1076050 cri.go:89] found id: ""
	I0127 15:41:50.160245 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.160254 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:50.160262 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:50.160315 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:50.200684 1076050 cri.go:89] found id: ""
	I0127 15:41:50.200710 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.200719 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:50.200724 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:50.200790 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:50.238625 1076050 cri.go:89] found id: ""
	I0127 15:41:50.238650 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.238658 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:50.238664 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:50.238721 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:50.276187 1076050 cri.go:89] found id: ""
	I0127 15:41:50.276217 1076050 logs.go:282] 0 containers: []
	W0127 15:41:50.276227 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:50.276238 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:50.276258 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:50.327617 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:50.327675 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:50.343530 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:50.343561 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:50.420740 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:50.420764 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:50.420776 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:50.506757 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:50.506809 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:53.057745 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:53.073259 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:53.073338 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:53.111798 1076050 cri.go:89] found id: ""
	I0127 15:41:53.111831 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.111839 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:53.111849 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:53.111921 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:53.151928 1076050 cri.go:89] found id: ""
	I0127 15:41:53.151959 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.151970 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:53.151978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:53.152045 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:53.187310 1076050 cri.go:89] found id: ""
	I0127 15:41:53.187357 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.187369 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:53.187377 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:53.187443 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:53.230758 1076050 cri.go:89] found id: ""
	I0127 15:41:53.230786 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.230795 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:53.230800 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:53.230852 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:53.266244 1076050 cri.go:89] found id: ""
	I0127 15:41:53.266276 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.266285 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:53.266291 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:53.266356 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:53.302601 1076050 cri.go:89] found id: ""
	I0127 15:41:53.302628 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.302638 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:53.302647 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:53.302710 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:53.342505 1076050 cri.go:89] found id: ""
	I0127 15:41:53.342541 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.342551 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:53.342561 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:53.342643 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:53.379672 1076050 cri.go:89] found id: ""
	I0127 15:41:53.379706 1076050 logs.go:282] 0 containers: []
	W0127 15:41:53.379718 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:53.379730 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:53.379745 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:53.421809 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:53.421852 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:53.475330 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:53.475369 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:53.490625 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:53.490652 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:53.560602 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:53.560627 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:53.560637 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:56.148600 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:56.162485 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:56.162564 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:56.200397 1076050 cri.go:89] found id: ""
	I0127 15:41:56.200434 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.200447 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:56.200458 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:56.200523 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:56.236022 1076050 cri.go:89] found id: ""
	I0127 15:41:56.236067 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.236078 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:56.236086 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:56.236154 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:56.275920 1076050 cri.go:89] found id: ""
	I0127 15:41:56.275956 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.275966 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:56.275975 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:56.276046 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:56.312921 1076050 cri.go:89] found id: ""
	I0127 15:41:56.312953 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.312963 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:56.312971 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:56.313056 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:56.352348 1076050 cri.go:89] found id: ""
	I0127 15:41:56.352373 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.352381 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:56.352387 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:56.352440 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:56.398556 1076050 cri.go:89] found id: ""
	I0127 15:41:56.398591 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.398603 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:56.398617 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:56.398686 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:56.440032 1076050 cri.go:89] found id: ""
	I0127 15:41:56.440063 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.440071 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:56.440078 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:56.440137 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:56.476249 1076050 cri.go:89] found id: ""
	I0127 15:41:56.476280 1076050 logs.go:282] 0 containers: []
	W0127 15:41:56.476291 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:56.476305 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:56.476321 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:56.530965 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:56.531017 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:56.545838 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:56.545869 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:56.618187 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:41:56.618245 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:56.618257 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:56.701048 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:56.701087 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:59.248508 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:41:59.262851 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:41:59.262928 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:41:59.300917 1076050 cri.go:89] found id: ""
	I0127 15:41:59.300947 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.300959 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:41:59.300967 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:41:59.301062 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:41:59.345421 1076050 cri.go:89] found id: ""
	I0127 15:41:59.345452 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.345463 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:41:59.345471 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:41:59.345568 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:41:59.381990 1076050 cri.go:89] found id: ""
	I0127 15:41:59.382025 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.382037 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:41:59.382046 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:41:59.382115 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:41:59.420410 1076050 cri.go:89] found id: ""
	I0127 15:41:59.420456 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.420466 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:41:59.420472 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:41:59.420543 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:41:59.461365 1076050 cri.go:89] found id: ""
	I0127 15:41:59.461391 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.461403 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:41:59.461412 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:41:59.461480 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:41:59.497094 1076050 cri.go:89] found id: ""
	I0127 15:41:59.497122 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.497130 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:41:59.497136 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:41:59.497201 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:41:59.537636 1076050 cri.go:89] found id: ""
	I0127 15:41:59.537663 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.537672 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:41:59.537680 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:41:59.537780 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:41:59.572954 1076050 cri.go:89] found id: ""
	I0127 15:41:59.572984 1076050 logs.go:282] 0 containers: []
	W0127 15:41:59.572993 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:41:59.573023 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:41:59.573039 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:41:59.660416 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:41:59.660457 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:41:59.702396 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:41:59.702423 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:41:59.758534 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:41:59.758583 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:41:59.772463 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:41:59.772496 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:41:59.849599 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:02.350500 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:02.364408 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:02.364483 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:02.400537 1076050 cri.go:89] found id: ""
	I0127 15:42:02.400574 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.400588 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:02.400596 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:02.400664 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:02.442696 1076050 cri.go:89] found id: ""
	I0127 15:42:02.442731 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.442743 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:02.442751 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:02.442825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:02.485485 1076050 cri.go:89] found id: ""
	I0127 15:42:02.485511 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.485522 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:02.485529 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:02.485595 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:02.524989 1076050 cri.go:89] found id: ""
	I0127 15:42:02.525036 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.525048 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:02.525057 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:02.525137 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:02.560538 1076050 cri.go:89] found id: ""
	I0127 15:42:02.560567 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.560578 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:02.560586 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:02.560649 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:02.602960 1076050 cri.go:89] found id: ""
	I0127 15:42:02.602996 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.603008 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:02.603017 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:02.603082 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:02.645389 1076050 cri.go:89] found id: ""
	I0127 15:42:02.645415 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.645425 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:02.645436 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:02.645502 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:02.689493 1076050 cri.go:89] found id: ""
	I0127 15:42:02.689526 1076050 logs.go:282] 0 containers: []
	W0127 15:42:02.689537 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:02.689549 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:02.689578 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:02.746806 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:02.746848 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:02.761212 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:02.761243 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:02.841116 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:02.841135 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:02.841147 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:02.932117 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:02.932159 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:05.477139 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:05.491255 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:05.491337 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:05.527520 1076050 cri.go:89] found id: ""
	I0127 15:42:05.527551 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.527563 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:05.527572 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:05.527639 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:05.569699 1076050 cri.go:89] found id: ""
	I0127 15:42:05.569731 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.569743 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:05.569752 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:05.569825 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:05.607615 1076050 cri.go:89] found id: ""
	I0127 15:42:05.607654 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.607667 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:05.607677 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:05.607750 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:05.644591 1076050 cri.go:89] found id: ""
	I0127 15:42:05.644622 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.644634 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:05.644642 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:05.644693 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:05.684235 1076050 cri.go:89] found id: ""
	I0127 15:42:05.684258 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.684265 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:05.684272 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:05.684327 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:05.722858 1076050 cri.go:89] found id: ""
	I0127 15:42:05.722902 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.722914 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:05.722924 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:05.722989 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:05.759028 1076050 cri.go:89] found id: ""
	I0127 15:42:05.759062 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.759074 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:05.759082 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:05.759203 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:05.799551 1076050 cri.go:89] found id: ""
	I0127 15:42:05.799580 1076050 logs.go:282] 0 containers: []
	W0127 15:42:05.799592 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:05.799608 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:05.799624 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:05.859709 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:05.859763 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:05.873857 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:05.873893 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:05.950048 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:05.950080 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:05.950097 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:06.027916 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:06.027961 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:08.576361 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:08.591092 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:08.591172 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:08.629233 1076050 cri.go:89] found id: ""
	I0127 15:42:08.629262 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.629271 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:08.629277 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:08.629330 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:08.664138 1076050 cri.go:89] found id: ""
	I0127 15:42:08.664172 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.664183 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:08.664192 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:08.664254 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:08.702076 1076050 cri.go:89] found id: ""
	I0127 15:42:08.702113 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.702124 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:08.702132 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:08.702195 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:08.738780 1076050 cri.go:89] found id: ""
	I0127 15:42:08.738813 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.738823 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:08.738831 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:08.738904 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:08.773890 1076050 cri.go:89] found id: ""
	I0127 15:42:08.773922 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.773930 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:08.773936 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:08.773987 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:08.808430 1076050 cri.go:89] found id: ""
	I0127 15:42:08.808465 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.808477 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:08.808485 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:08.808553 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:08.844590 1076050 cri.go:89] found id: ""
	I0127 15:42:08.844615 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.844626 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:08.844634 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:08.844701 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:08.888333 1076050 cri.go:89] found id: ""
	I0127 15:42:08.888368 1076050 logs.go:282] 0 containers: []
	W0127 15:42:08.888377 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:08.888388 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:08.888420 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:08.941417 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:08.941453 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:08.956868 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:08.956942 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:09.049362 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:09.049390 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:09.049406 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:09.129215 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:09.129255 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:11.675550 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:11.690737 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:11.690808 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:11.727524 1076050 cri.go:89] found id: ""
	I0127 15:42:11.727554 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.727564 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:11.727572 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:11.727635 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:11.764046 1076050 cri.go:89] found id: ""
	I0127 15:42:11.764073 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.764082 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:11.764089 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:11.764142 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:11.799530 1076050 cri.go:89] found id: ""
	I0127 15:42:11.799562 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.799574 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:11.799582 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:11.799647 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:11.839880 1076050 cri.go:89] found id: ""
	I0127 15:42:11.839912 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.839921 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:11.839927 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:11.839989 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:11.876263 1076050 cri.go:89] found id: ""
	I0127 15:42:11.876313 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.876324 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:11.876332 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:11.876403 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:11.919106 1076050 cri.go:89] found id: ""
	I0127 15:42:11.919136 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.919144 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:11.919150 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:11.919209 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:11.957253 1076050 cri.go:89] found id: ""
	I0127 15:42:11.957285 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.957296 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:11.957304 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:11.957369 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:11.993481 1076050 cri.go:89] found id: ""
	I0127 15:42:11.993515 1076050 logs.go:282] 0 containers: []
	W0127 15:42:11.993527 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:11.993544 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:11.993560 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:12.063236 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:12.063264 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:12.063285 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:12.149889 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:12.149932 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:12.195704 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:12.195730 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:12.254422 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:12.254457 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:14.768483 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:14.782452 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:14.782539 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:14.822523 1076050 cri.go:89] found id: ""
	I0127 15:42:14.822558 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.822570 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:14.822576 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:14.822654 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:14.861058 1076050 cri.go:89] found id: ""
	I0127 15:42:14.861085 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.861094 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:14.861099 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:14.861164 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:14.898147 1076050 cri.go:89] found id: ""
	I0127 15:42:14.898178 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.898189 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:14.898199 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:14.898265 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:14.936269 1076050 cri.go:89] found id: ""
	I0127 15:42:14.936299 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.936307 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:14.936313 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:14.936378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:14.971287 1076050 cri.go:89] found id: ""
	I0127 15:42:14.971320 1076050 logs.go:282] 0 containers: []
	W0127 15:42:14.971332 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:14.971341 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:14.971394 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:15.011649 1076050 cri.go:89] found id: ""
	I0127 15:42:15.011679 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.011687 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:15.011693 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:15.011744 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:15.047290 1076050 cri.go:89] found id: ""
	I0127 15:42:15.047329 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.047340 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:15.047349 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:15.047413 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:15.089625 1076050 cri.go:89] found id: ""
	I0127 15:42:15.089655 1076050 logs.go:282] 0 containers: []
	W0127 15:42:15.089667 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:15.089680 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:15.089694 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:15.136374 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:15.136410 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:15.195628 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:15.195676 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:15.213575 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:15.213679 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:15.293664 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:15.293694 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:15.293707 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:17.882520 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:17.896333 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:17.896403 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:17.935049 1076050 cri.go:89] found id: ""
	I0127 15:42:17.935078 1076050 logs.go:282] 0 containers: []
	W0127 15:42:17.935088 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:17.935096 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:17.935158 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:17.972911 1076050 cri.go:89] found id: ""
	I0127 15:42:17.972946 1076050 logs.go:282] 0 containers: []
	W0127 15:42:17.972958 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:17.972967 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:17.973073 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:18.017249 1076050 cri.go:89] found id: ""
	I0127 15:42:18.017276 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.017286 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:18.017292 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:18.017353 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:18.059963 1076050 cri.go:89] found id: ""
	I0127 15:42:18.059995 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.060007 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:18.060016 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:18.060086 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:18.106174 1076050 cri.go:89] found id: ""
	I0127 15:42:18.106219 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.106232 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:18.106248 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:18.106318 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:18.146130 1076050 cri.go:89] found id: ""
	I0127 15:42:18.146161 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.146176 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:18.146184 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:18.146256 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:18.184143 1076050 cri.go:89] found id: ""
	I0127 15:42:18.184176 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.184185 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:18.184191 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:18.184246 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:18.225042 1076050 cri.go:89] found id: ""
	I0127 15:42:18.225084 1076050 logs.go:282] 0 containers: []
	W0127 15:42:18.225096 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:18.225110 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:18.225127 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:18.263543 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:18.263577 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:18.321274 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:18.321323 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:18.336830 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:18.336861 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:18.420928 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:18.420955 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:18.420971 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:21.014731 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:21.030978 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:21.031048 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:21.071340 1076050 cri.go:89] found id: ""
	I0127 15:42:21.071370 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.071378 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:21.071385 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:21.071442 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:21.107955 1076050 cri.go:89] found id: ""
	I0127 15:42:21.107987 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.107999 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:21.108006 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:21.108073 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:21.148426 1076050 cri.go:89] found id: ""
	I0127 15:42:21.148465 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.148477 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:21.148488 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:21.148561 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:21.199228 1076050 cri.go:89] found id: ""
	I0127 15:42:21.199262 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.199273 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:21.199282 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:21.199353 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:21.259122 1076050 cri.go:89] found id: ""
	I0127 15:42:21.259156 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.259167 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:21.259175 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:21.259249 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:21.316242 1076050 cri.go:89] found id: ""
	I0127 15:42:21.316288 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.316300 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:21.316309 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:21.316378 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:21.360071 1076050 cri.go:89] found id: ""
	I0127 15:42:21.360104 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.360116 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:21.360125 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:21.360190 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:21.405056 1076050 cri.go:89] found id: ""
	I0127 15:42:21.405088 1076050 logs.go:282] 0 containers: []
	W0127 15:42:21.405099 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:21.405112 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:21.405129 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:21.419657 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:21.419688 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:21.495931 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:21.495957 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:21.495973 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:21.578029 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:21.578075 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:21.626705 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:21.626742 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:24.180267 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:24.193848 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:24.193927 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:24.232734 1076050 cri.go:89] found id: ""
	I0127 15:42:24.232767 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.232778 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:24.232787 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:24.232855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:24.274373 1076050 cri.go:89] found id: ""
	I0127 15:42:24.274410 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.274421 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:24.274430 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:24.274486 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:24.314420 1076050 cri.go:89] found id: ""
	I0127 15:42:24.314449 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.314459 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:24.314469 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:24.314533 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:24.353247 1076050 cri.go:89] found id: ""
	I0127 15:42:24.353284 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.353302 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:24.353311 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:24.353380 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:24.395518 1076050 cri.go:89] found id: ""
	I0127 15:42:24.395545 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.395556 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:24.395564 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:24.395630 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:24.433954 1076050 cri.go:89] found id: ""
	I0127 15:42:24.433988 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.433999 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:24.434008 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:24.434078 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:24.475406 1076050 cri.go:89] found id: ""
	I0127 15:42:24.475438 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.475451 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:24.475460 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:24.475530 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:24.511024 1076050 cri.go:89] found id: ""
	I0127 15:42:24.511062 1076050 logs.go:282] 0 containers: []
	W0127 15:42:24.511074 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:24.511086 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:24.511105 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:24.585723 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:24.585746 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:24.585766 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:24.666956 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:24.666997 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:24.707929 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:24.707953 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:24.761870 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:24.761906 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:27.276721 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:27.292246 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:42:27.292341 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:42:27.332682 1076050 cri.go:89] found id: ""
	I0127 15:42:27.332715 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.332725 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:42:27.332733 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:42:27.332804 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:42:27.368942 1076050 cri.go:89] found id: ""
	I0127 15:42:27.368975 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.368988 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:42:27.368997 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:42:27.369083 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:42:27.406074 1076050 cri.go:89] found id: ""
	I0127 15:42:27.406116 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.406133 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:42:27.406141 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:42:27.406195 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:42:27.443019 1076050 cri.go:89] found id: ""
	I0127 15:42:27.443049 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.443061 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:42:27.443069 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:42:27.443136 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:42:27.478322 1076050 cri.go:89] found id: ""
	I0127 15:42:27.478359 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.478370 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:42:27.478380 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:42:27.478463 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:42:27.517749 1076050 cri.go:89] found id: ""
	I0127 15:42:27.517781 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.517793 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:42:27.517802 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:42:27.517868 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:42:27.556151 1076050 cri.go:89] found id: ""
	I0127 15:42:27.556182 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.556191 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:42:27.556197 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:42:27.556260 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:42:27.594607 1076050 cri.go:89] found id: ""
	I0127 15:42:27.594638 1076050 logs.go:282] 0 containers: []
	W0127 15:42:27.594646 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:42:27.594656 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:42:27.594666 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 15:42:27.675142 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:42:27.675184 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:42:27.719306 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:42:27.719341 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:42:27.771036 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:42:27.771076 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:42:27.785422 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:42:27.785451 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:42:27.863147 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:42:30.364006 1076050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:42:30.378275 1076050 kubeadm.go:597] duration metric: took 4m3.244067669s to restartPrimaryControlPlane
	W0127 15:42:30.378392 1076050 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 15:42:30.378427 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:42:32.324859 1076050 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.946405854s)
	I0127 15:42:32.324949 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:42:32.342099 1076050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 15:42:32.353110 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:42:32.365238 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:42:32.365259 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:42:32.365309 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:42:32.376623 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:42:32.376679 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:42:32.387533 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:42:32.397645 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:42:32.397706 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:42:32.409015 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:42:32.420172 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:42:32.420236 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:42:32.430688 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:42:32.441797 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:42:32.441856 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:42:32.452009 1076050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:42:32.678031 1076050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:44:29.249145 1076050 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:44:29.249258 1076050 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:44:29.250830 1076050 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:44:29.250891 1076050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:44:29.251016 1076050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:44:29.251168 1076050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:44:29.251317 1076050 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:44:29.251390 1076050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:44:29.253163 1076050 out.go:235]   - Generating certificates and keys ...
	I0127 15:44:29.253266 1076050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:44:29.253389 1076050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:44:29.253470 1076050 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:44:29.253522 1076050 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:44:29.253581 1076050 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:44:29.253626 1076050 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:44:29.253704 1076050 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:44:29.253772 1076050 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:44:29.253864 1076050 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:44:29.253956 1076050 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:44:29.254008 1076050 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:44:29.254112 1076050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:44:29.254215 1076050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:44:29.254305 1076050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:44:29.254391 1076050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:44:29.254466 1076050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:44:29.254625 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:44:29.254763 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:44:29.254826 1076050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:44:29.254989 1076050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:44:29.256624 1076050 out.go:235]   - Booting up control plane ...
	I0127 15:44:29.256744 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:44:29.256829 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:44:29.256905 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:44:29.257025 1076050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:44:29.257228 1076050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:44:29.257290 1076050 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:44:29.257373 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.257657 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.257767 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.257963 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258031 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258254 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258355 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258591 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258669 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:44:29.258862 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:44:29.258871 1076050 kubeadm.go:310] 
	I0127 15:44:29.258904 1076050 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:44:29.258972 1076050 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:44:29.258989 1076050 kubeadm.go:310] 
	I0127 15:44:29.259027 1076050 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:44:29.259057 1076050 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:44:29.259205 1076050 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:44:29.259221 1076050 kubeadm.go:310] 
	I0127 15:44:29.259358 1076050 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:44:29.259391 1076050 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:44:29.259444 1076050 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:44:29.259459 1076050 kubeadm.go:310] 
	I0127 15:44:29.259593 1076050 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:44:29.259701 1076050 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:44:29.259710 1076050 kubeadm.go:310] 
	I0127 15:44:29.259818 1076050 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:44:29.259940 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:44:29.260041 1076050 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:44:29.260150 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:44:29.260179 1076050 kubeadm.go:310] 
	W0127 15:44:29.260362 1076050 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 15:44:29.260421 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 15:44:29.751111 1076050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:44:29.767368 1076050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 15:44:29.778471 1076050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 15:44:29.778498 1076050 kubeadm.go:157] found existing configuration files:
	
	I0127 15:44:29.778554 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 15:44:29.789258 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 15:44:29.789331 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 15:44:29.799796 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 15:44:29.809761 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 15:44:29.809824 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 15:44:29.819822 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 15:44:29.829277 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 15:44:29.829350 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 15:44:29.840607 1076050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 15:44:29.850589 1076050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 15:44:29.850656 1076050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 15:44:29.860352 1076050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 15:44:29.931615 1076050 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 15:44:29.931737 1076050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 15:44:30.090907 1076050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 15:44:30.091038 1076050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 15:44:30.091180 1076050 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 15:44:30.288545 1076050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 15:44:30.290548 1076050 out.go:235]   - Generating certificates and keys ...
	I0127 15:44:30.290678 1076050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 15:44:30.290777 1076050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 15:44:30.290899 1076050 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 15:44:30.290993 1076050 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 15:44:30.291119 1076050 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 15:44:30.291213 1076050 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 15:44:30.291312 1076050 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 15:44:30.291399 1076050 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 15:44:30.291523 1076050 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 15:44:30.291640 1076050 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 15:44:30.291718 1076050 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 15:44:30.291806 1076050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 15:44:30.471428 1076050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 15:44:30.705804 1076050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 15:44:30.959802 1076050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 15:44:31.149201 1076050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 15:44:31.173695 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 15:44:31.174653 1076050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 15:44:31.174752 1076050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 15:44:31.342124 1076050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 15:44:31.344077 1076050 out.go:235]   - Booting up control plane ...
	I0127 15:44:31.344184 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 15:44:31.348014 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 15:44:31.349159 1076050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 15:44:31.349960 1076050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 15:44:31.352168 1076050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 15:45:11.354910 1076050 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 15:45:11.355380 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:11.355582 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:16.356239 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:16.356487 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:26.357276 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:26.357605 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:45:46.358046 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:45:46.358293 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:46:26.356549 1076050 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 15:46:26.356813 1076050 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 15:46:26.356830 1076050 kubeadm.go:310] 
	I0127 15:46:26.356897 1076050 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 15:46:26.356938 1076050 kubeadm.go:310] 		timed out waiting for the condition
	I0127 15:46:26.356949 1076050 kubeadm.go:310] 
	I0127 15:46:26.357026 1076050 kubeadm.go:310] 	This error is likely caused by:
	I0127 15:46:26.357106 1076050 kubeadm.go:310] 		- The kubelet is not running
	I0127 15:46:26.357302 1076050 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 15:46:26.357336 1076050 kubeadm.go:310] 
	I0127 15:46:26.357498 1076050 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 15:46:26.357548 1076050 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 15:46:26.357607 1076050 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 15:46:26.357624 1076050 kubeadm.go:310] 
	I0127 15:46:26.357766 1076050 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 15:46:26.357862 1076050 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 15:46:26.357878 1076050 kubeadm.go:310] 
	I0127 15:46:26.358043 1076050 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 15:46:26.358166 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 15:46:26.358290 1076050 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 15:46:26.358368 1076050 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 15:46:26.358379 1076050 kubeadm.go:310] 
	I0127 15:46:26.358971 1076050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 15:46:26.359102 1076050 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 15:46:26.359219 1076050 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 15:46:26.359281 1076050 kubeadm.go:394] duration metric: took 7m59.27977519s to StartCluster
	I0127 15:46:26.359443 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 15:46:26.359522 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 15:46:26.408713 1076050 cri.go:89] found id: ""
	I0127 15:46:26.408752 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.408764 1076050 logs.go:284] No container was found matching "kube-apiserver"
	I0127 15:46:26.408772 1076050 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 15:46:26.408832 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 15:46:26.449156 1076050 cri.go:89] found id: ""
	I0127 15:46:26.449190 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.449200 1076050 logs.go:284] No container was found matching "etcd"
	I0127 15:46:26.449208 1076050 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 15:46:26.449306 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 15:46:26.487786 1076050 cri.go:89] found id: ""
	I0127 15:46:26.487812 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.487820 1076050 logs.go:284] No container was found matching "coredns"
	I0127 15:46:26.487827 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 15:46:26.487876 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 15:46:26.546745 1076050 cri.go:89] found id: ""
	I0127 15:46:26.546772 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.546782 1076050 logs.go:284] No container was found matching "kube-scheduler"
	I0127 15:46:26.546791 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 15:46:26.546855 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 15:46:26.584262 1076050 cri.go:89] found id: ""
	I0127 15:46:26.584300 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.584308 1076050 logs.go:284] No container was found matching "kube-proxy"
	I0127 15:46:26.584316 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 15:46:26.584385 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 15:46:26.622575 1076050 cri.go:89] found id: ""
	I0127 15:46:26.622608 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.622617 1076050 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 15:46:26.622623 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 15:46:26.622683 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 15:46:26.660928 1076050 cri.go:89] found id: ""
	I0127 15:46:26.660955 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.660964 1076050 logs.go:284] No container was found matching "kindnet"
	I0127 15:46:26.660970 1076050 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 15:46:26.661062 1076050 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 15:46:26.698084 1076050 cri.go:89] found id: ""
	I0127 15:46:26.698116 1076050 logs.go:282] 0 containers: []
	W0127 15:46:26.698125 1076050 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 15:46:26.698139 1076050 logs.go:123] Gathering logs for container status ...
	I0127 15:46:26.698151 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 15:46:26.742459 1076050 logs.go:123] Gathering logs for kubelet ...
	I0127 15:46:26.742486 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 15:46:26.797935 1076050 logs.go:123] Gathering logs for dmesg ...
	I0127 15:46:26.797977 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 15:46:26.814213 1076050 logs.go:123] Gathering logs for describe nodes ...
	I0127 15:46:26.814248 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 15:46:26.903335 1076050 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 15:46:26.903373 1076050 logs.go:123] Gathering logs for CRI-O ...
	I0127 15:46:26.903392 1076050 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 15:46:27.016392 1076050 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 15:46:27.016470 1076050 out.go:270] * 
	W0127 15:46:27.016547 1076050 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:46:27.016561 1076050 out.go:270] * 
	W0127 15:46:27.017322 1076050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 15:46:27.020682 1076050 out.go:201] 
	W0127 15:46:27.022217 1076050 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 15:46:27.022269 1076050 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 15:46:27.022288 1076050 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 15:46:27.023966 1076050 out.go:201] 
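
Editor's note: the failure output above ends with minikube's own suggestion to inspect 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd. Below is a minimal sketch of that remediation for this profile; the profile name old-k8s-version-405706 is taken from the CRI-O log that follows, and the remaining flags (container runtime, Kubernetes version) are assumptions inferred from the log, not the job's recorded invocation.

# Inspect the kubelet unit on the node, as the log suggests
minikube ssh -p old-k8s-version-405706 -- sudo journalctl -xeu kubelet | tail -n 100

# Retry the start with the kubelet cgroup driver pinned to systemd
# (assumed flags; adjust to match the original test invocation)
minikube start -p old-k8s-version-405706 \
  --container-runtime=crio \
  --kubernetes-version=v1.20.0 \
  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still refuses connections on 127.0.0.1:10248 after this, the in-log guidance applies unchanged: list containers with 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a' and inspect the failing one with 'crictl logs CONTAINERID'.
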
	
	
	==> CRI-O <==
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.663443567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993672663336095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bd77831-cbe8-445f-9a78-7b8e6e053234 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.664202114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b270be1-8396-43db-a553-fe33d5191009 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.664287390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b270be1-8396-43db-a553-fe33d5191009 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.664334451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3b270be1-8396-43db-a553-fe33d5191009 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.699079032Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddc11f31-28a1-433d-9615-873d11a67e32 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.699164810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddc11f31-28a1-433d-9615-873d11a67e32 name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.700506254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9e18428-3ecf-42e6-b0c0-d43416fb82f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.700902325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993672700883017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9e18428-3ecf-42e6-b0c0-d43416fb82f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.701314279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=575a8ad5-64df-4706-934e-86cc8bf025be name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.701449459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=575a8ad5-64df-4706-934e-86cc8bf025be name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.701483250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=575a8ad5-64df-4706-934e-86cc8bf025be name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.735325701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41fb1c17-7de5-40d1-99c6-92f7fc24ec3a name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.735471236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41fb1c17-7de5-40d1-99c6-92f7fc24ec3a name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.736644413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca0d81fb-fcfe-42f6-9eff-6dbd146586c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.737028129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993672736999631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca0d81fb-fcfe-42f6-9eff-6dbd146586c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.737558219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=930d4f29-091e-41f4-a5dd-3da7f14e5634 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.737617445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=930d4f29-091e-41f4-a5dd-3da7f14e5634 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.737705651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=930d4f29-091e-41f4-a5dd-3da7f14e5634 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.773749963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ff458cd-b5e0-48b7-ae0f-49a685eff03d name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.773827529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ff458cd-b5e0-48b7-ae0f-49a685eff03d name=/runtime.v1.RuntimeService/Version
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.775274850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ba56475-771f-42f0-913b-b29be2013499 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.775710286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737993672775678665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ba56475-771f-42f0-913b-b29be2013499 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.776250071Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c563756c-c32c-442c-b1ef-9df0aab41d2b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.776315384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c563756c-c32c-442c-b1ef-9df0aab41d2b name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 16:01:12 old-k8s-version-405706 crio[634]: time="2025-01-27 16:01:12.776351285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c563756c-c32c-442c-b1ef-9df0aab41d2b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 15:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054128] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043515] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175374] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.998732] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.641220] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.061271] systemd-fstab-generator[561]: Ignoring "noauto" option for root device
	[  +0.065012] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073970] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.202651] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.132479] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.248883] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +6.567266] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.063012] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.058094] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.932312] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 15:42] systemd-fstab-generator[5003]: Ignoring "noauto" option for root device
	[Jan27 15:44] systemd-fstab-generator[5276]: Ignoring "noauto" option for root device
	[  +0.074147] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 16:01:12 up 23 min,  0 users,  load average: 0.04, 0.03, 0.04
	Linux old-k8s-version-405706 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/pkg/kubelet/config.tryDecodeSinglePod(0xc000948000, 0x907, 0xe00, 0xc00095dc10, 0x907, 0xe00, 0x0, 0x0)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/common.go:123 +0x119
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).extractFromFile(0xc0008d9400, 0xc00090a210, 0x23, 0x0, 0x0, 0x0)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file.go:228 +0x292
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).extractFromDir(0xc0008d9400, 0xc000695320, 0x19, 0xc0007196c0, 0x0, 0x0, 0x0, 0x0)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file.go:184 +0x5e5
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).listConfig(0xc0008d9400, 0x0, 0x0)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file.go:135 +0x20d
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).run.func1(0xc0008d9400, 0xc0008d9450)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file.go:97 +0x45
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: created by k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).run
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file.go:95 +0x5b
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: goroutine 146 [select]:
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).doWatch(0xc0008d9400, 0x0, 0x0)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go:91 +0x3c5
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/pkg/kubelet/config.(*sourceFile).startWatch.func1()
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/file_linux.go:59 +0xb6
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0009ea390)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009ea390, 0x4f0ac40, 0xc0008bf050, 0x1, 0xc00009e0c0)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009ea390, 0x3b9aca00, 0x0, 0x1, 0xc00009e0c0)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	Jan 27 16:01:13 old-k8s-version-405706 kubelet[7081]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 2 (264.292016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-405706" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (341.70s)
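The kubelet panic and the refused connection to localhost:8443 above match the suggestion minikube printed earlier in this log: check 'journalctl -xeu kubelet' and pass the kubelet cgroup driver explicitly. As a hedged sketch only, not a verified fix for this run, the commands below restate that suggestion against the profile name used in this report:

	# Editorial sketch; the flag value mirrors minikube's own suggestion above and is not
	# confirmed to resolve this particular failure.
	journalctl -xeu kubelet | tail -n 100
	out/minikube-linux-amd64 start -p old-k8s-version-405706 --extra-config=kubelet.cgroup-driver=systemd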

                                                
                                    

Test pass (252/308)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.31
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 5.42
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.07
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.62
22 TestOffline 59.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 200.01
31 TestAddons/serial/GCPAuth/Namespaces 1.95
32 TestAddons/serial/GCPAuth/FakeCredentials 8.54
35 TestAddons/parallel/Registry 67.17
37 TestAddons/parallel/InspektorGadget 11.79
38 TestAddons/parallel/MetricsServer 7.2
41 TestAddons/parallel/Headlamp 38.94
42 TestAddons/parallel/CloudSpanner 5.57
44 TestAddons/parallel/NvidiaDevicePlugin 6.56
45 TestAddons/parallel/Yakd 11.78
47 TestAddons/StoppedEnableDisable 91.26
48 TestCertOptions 88.14
49 TestCertExpiration 328.73
51 TestForceSystemdFlag 49.01
52 TestForceSystemdEnv 45.33
54 TestKVMDriverInstallOrUpdate 3.42
58 TestErrorSpam/setup 43.6
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.64
62 TestErrorSpam/unpause 1.75
63 TestErrorSpam/stop 5.2
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 53.48
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.5
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.66
75 TestFunctional/serial/CacheCmd/cache/add_local 1.49
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.78
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 35.12
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.5
87 TestFunctional/serial/InvalidService 3.94
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 98.29
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.14
97 TestFunctional/parallel/ServiceCmdConnect 9.55
98 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/SSHCmd 0.49
102 TestFunctional/parallel/CpCmd 1.45
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.51
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.49
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.45
121 TestFunctional/parallel/ImageCommands/Setup 3.01
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.43
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.95
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
143 TestFunctional/parallel/ProfileCmd/profile_list 0.33
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
145 TestFunctional/parallel/ServiceCmd/List 0.31
146 TestFunctional/parallel/MountCmd/any-port 30.85
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.26
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
149 TestFunctional/parallel/ServiceCmd/Format 0.31
150 TestFunctional/parallel/ServiceCmd/URL 0.33
151 TestFunctional/parallel/MountCmd/specific-port 1.94
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 232.95
160 TestMultiControlPlane/serial/DeployApp 5.28
161 TestMultiControlPlane/serial/PingHostFromPods 1.26
162 TestMultiControlPlane/serial/AddWorkerNode 55.21
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
165 TestMultiControlPlane/serial/CopyFile 13.2
166 TestMultiControlPlane/serial/StopSecondaryNode 91.47
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
168 TestMultiControlPlane/serial/RestartSecondaryNode 62.19
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 428.53
171 TestMultiControlPlane/serial/DeleteSecondaryNode 18.32
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 272.75
174 TestMultiControlPlane/serial/RestartCluster 131.84
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
176 TestMultiControlPlane/serial/AddSecondaryNode 113.99
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
181 TestJSONOutput/start/Command 59.85
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.69
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.63
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.38
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 94.73
213 TestMountStart/serial/StartWithMountFirst 24.92
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 31.98
216 TestMountStart/serial/VerifyMountSecond 0.38
217 TestMountStart/serial/DeleteFirst 1.14
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 23.81
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 110.5
225 TestMultiNode/serial/DeployApp2Nodes 4.23
226 TestMultiNode/serial/PingHostFrom2Pods 0.81
227 TestMultiNode/serial/AddNode 50.57
228 TestMultiNode/serial/MultiNodeLabels 0.07
229 TestMultiNode/serial/ProfileList 0.59
230 TestMultiNode/serial/CopyFile 7.52
231 TestMultiNode/serial/StopNode 2.49
232 TestMultiNode/serial/StartAfterStop 44.74
233 TestMultiNode/serial/RestartKeepsNodes 330.74
234 TestMultiNode/serial/DeleteNode 2.83
235 TestMultiNode/serial/StopMultiNode 181.89
236 TestMultiNode/serial/RestartMultiNode 102.17
237 TestMultiNode/serial/ValidateNameConflict 42.82
244 TestScheduledStopUnix 114.68
248 TestRunningBinaryUpgrade 200.19
254 TestStoppedBinaryUpgrade/Setup 0.71
256 TestStoppedBinaryUpgrade/Upgrade 169.62
261 TestNetworkPlugins/group/false 3.58
273 TestPause/serial/Start 102.43
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
278 TestNoKubernetes/serial/StartWithK8s 75.59
279 TestNetworkPlugins/group/auto/Start 83.39
280 TestNoKubernetes/serial/StartWithStopK8s 36.8
281 TestNoKubernetes/serial/Start 30.32
282 TestNetworkPlugins/group/auto/KubeletFlags 0.2
283 TestNetworkPlugins/group/auto/NetCatPod 10.22
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
285 TestNoKubernetes/serial/ProfileList 13.79
286 TestNetworkPlugins/group/auto/DNS 0.15
287 TestNetworkPlugins/group/auto/Localhost 0.12
288 TestNetworkPlugins/group/auto/HairPin 0.12
289 TestNoKubernetes/serial/Stop 1.48
290 TestNoKubernetes/serial/StartNoArgs 23.64
291 TestNetworkPlugins/group/kindnet/Start 82.37
292 TestNetworkPlugins/group/calico/Start 124.91
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
294 TestNetworkPlugins/group/custom-flannel/Start 126.82
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
297 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
298 TestNetworkPlugins/group/kindnet/DNS 0.18
299 TestNetworkPlugins/group/kindnet/Localhost 0.15
300 TestNetworkPlugins/group/kindnet/HairPin 0.16
301 TestNetworkPlugins/group/enable-default-cni/Start 60.88
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/flannel/Start 86.97
304 TestNetworkPlugins/group/calico/KubeletFlags 0.22
305 TestNetworkPlugins/group/calico/NetCatPod 11.24
306 TestNetworkPlugins/group/calico/DNS 0.17
307 TestNetworkPlugins/group/calico/Localhost 0.14
308 TestNetworkPlugins/group/calico/HairPin 0.15
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.3
311 TestNetworkPlugins/group/custom-flannel/DNS 0.17
312 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
313 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
314 TestNetworkPlugins/group/bridge/Start 73.29
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.3
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
323 TestStartStop/group/no-preload/serial/FirstStart 91.71
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
326 TestNetworkPlugins/group/flannel/NetCatPod 10.52
327 TestNetworkPlugins/group/flannel/DNS 0.24
328 TestNetworkPlugins/group/flannel/Localhost 0.13
329 TestNetworkPlugins/group/flannel/HairPin 0.14
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
331 TestNetworkPlugins/group/bridge/NetCatPod 12.31
332 TestNetworkPlugins/group/bridge/DNS 0.18
333 TestNetworkPlugins/group/bridge/Localhost 0.12
334 TestNetworkPlugins/group/bridge/HairPin 0.11
336 TestStartStop/group/embed-certs/serial/FirstStart 66.74
338 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74
339 TestStartStop/group/no-preload/serial/DeployApp 9.31
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
341 TestStartStop/group/no-preload/serial/Stop 91.04
342 TestStartStop/group/embed-certs/serial/DeployApp 9.31
343 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
344 TestStartStop/group/embed-certs/serial/Stop 91.07
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.05
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
350 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
356 TestStartStop/group/old-k8s-version/serial/Stop 5.31
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
362 TestStartStop/group/newest-cni/serial/FirstStart 52.39
363 TestStartStop/group/newest-cni/serial/DeployApp 0
364 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
365 TestStartStop/group/newest-cni/serial/Stop 10.55
366 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
367 TestStartStop/group/newest-cni/serial/SecondStart 40.76
368 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
370 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
371 TestStartStop/group/newest-cni/serial/Pause 4.52
x
+
TestDownloadOnly/v1.20.0/json-events (8.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-671066 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-671066 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.313362277s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 14:05:36.683519 1012816 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0127 14:05:36.683687 1012816 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-671066
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-671066: exit status 85 (66.959096ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |          |
	|         | -p download-only-671066        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:05:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:05:28.414526 1012828 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:05:28.414640 1012828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:28.414645 1012828 out.go:358] Setting ErrFile to fd 2...
	I0127 14:05:28.414649 1012828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:28.414869 1012828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	W0127 14:05:28.415012 1012828 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20321-1005652/.minikube/config/config.json: open /home/jenkins/minikube-integration/20321-1005652/.minikube/config/config.json: no such file or directory
	I0127 14:05:28.415647 1012828 out.go:352] Setting JSON to true
	I0127 14:05:28.416677 1012828 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17275,"bootTime":1737969453,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:05:28.416812 1012828 start.go:139] virtualization: kvm guest
	I0127 14:05:28.419435 1012828 out.go:97] [download-only-671066] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 14:05:28.419559 1012828 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 14:05:28.419596 1012828 notify.go:220] Checking for updates...
	I0127 14:05:28.421082 1012828 out.go:169] MINIKUBE_LOCATION=20321
	I0127 14:05:28.422590 1012828 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:05:28.423976 1012828 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:05:28.425286 1012828 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:28.426667 1012828 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 14:05:28.429293 1012828 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 14:05:28.429545 1012828 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:05:28.467164 1012828 out.go:97] Using the kvm2 driver based on user configuration
	I0127 14:05:28.467195 1012828 start.go:297] selected driver: kvm2
	I0127 14:05:28.467208 1012828 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:05:28.467719 1012828 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:28.467837 1012828 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:05:28.484819 1012828 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:05:28.484872 1012828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:05:28.485462 1012828 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 14:05:28.485614 1012828 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 14:05:28.485647 1012828 cni.go:84] Creating CNI manager for ""
	I0127 14:05:28.485723 1012828 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:05:28.485735 1012828 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:05:28.485809 1012828 start.go:340] cluster config:
	{Name:download-only-671066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-671066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:05:28.486087 1012828 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:28.487895 1012828 out.go:97] Downloading VM boot image ...
	I0127 14:05:28.487937 1012828 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:05:31.155222 1012828 out.go:97] Starting "download-only-671066" primary control-plane node in "download-only-671066" cluster
	I0127 14:05:31.155247 1012828 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:05:31.185414 1012828 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 14:05:31.185453 1012828 cache.go:56] Caching tarball of preloaded images
	I0127 14:05:31.185658 1012828 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 14:05:31.187608 1012828 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 14:05:31.187633 1012828 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0127 14:05:31.216536 1012828 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-671066 host does not exist
	  To start a cluster, run: "minikube start -p download-only-671066"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
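The download URL captured above carries an md5 checksum for the v1.20.0 preload tarball. A minimal sketch for checking the cached file against that value by hand; the path and expected checksum are copied from this report, not re-derived:

	md5sum /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	# expected: f93b07cde9c3289306cbaeb7a1803c19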

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-671066
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (5.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-223205 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-223205 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.417410028s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.42s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 14:05:42.453211 1012816 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0127 14:05:42.453279 1012816 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-223205
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-223205: exit status 85 (65.52996ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | -p download-only-671066        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| delete  | -p download-only-671066        | download-only-671066 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC | 27 Jan 25 14:05 UTC |
	| start   | -o=json --download-only        | download-only-223205 | jenkins | v1.35.0 | 27 Jan 25 14:05 UTC |                     |
	|         | -p download-only-223205        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:05:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:05:37.081247 1013027 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:05:37.081366 1013027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:37.081376 1013027 out.go:358] Setting ErrFile to fd 2...
	I0127 14:05:37.081380 1013027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:05:37.081583 1013027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:05:37.082154 1013027 out.go:352] Setting JSON to true
	I0127 14:05:37.083131 1013027 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17284,"bootTime":1737969453,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:05:37.083235 1013027 start.go:139] virtualization: kvm guest
	I0127 14:05:37.085211 1013027 out.go:97] [download-only-223205] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:05:37.085365 1013027 notify.go:220] Checking for updates...
	I0127 14:05:37.086783 1013027 out.go:169] MINIKUBE_LOCATION=20321
	I0127 14:05:37.088360 1013027 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:05:37.089606 1013027 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:05:37.090874 1013027 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:05:37.092067 1013027 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 14:05:37.094451 1013027 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 14:05:37.094658 1013027 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:05:37.126854 1013027 out.go:97] Using the kvm2 driver based on user configuration
	I0127 14:05:37.126895 1013027 start.go:297] selected driver: kvm2
	I0127 14:05:37.126902 1013027 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:05:37.127287 1013027 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:37.127420 1013027 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-1005652/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:05:37.143121 1013027 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:05:37.143185 1013027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:05:37.143701 1013027 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 14:05:37.143835 1013027 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 14:05:37.143863 1013027 cni.go:84] Creating CNI manager for ""
	I0127 14:05:37.143912 1013027 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 14:05:37.143923 1013027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:05:37.143966 1013027 start.go:340] cluster config:
	{Name:download-only-223205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-223205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:05:37.144089 1013027 iso.go:125] acquiring lock: {Name:mk09983d13fb1a3582857ab934539ca4709ad90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:05:37.146037 1013027 out.go:97] Starting "download-only-223205" primary control-plane node in "download-only-223205" cluster
	I0127 14:05:37.146063 1013027 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:05:37.180285 1013027 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:05:37.180319 1013027 cache.go:56] Caching tarball of preloaded images
	I0127 14:05:37.180491 1013027 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:05:37.182390 1013027 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 14:05:37.182420 1013027 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0127 14:05:37.215861 1013027 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 14:05:40.983529 1013027 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0127 14:05:40.983629 1013027 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0127 14:05:41.760456 1013027 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 14:05:41.760865 1013027 profile.go:143] Saving config to /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/download-only-223205/config.json ...
	I0127 14:05:41.760901 1013027 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/download-only-223205/config.json: {Name:mk2e2b30346a1766f745c319d9408ae0a600c735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 14:05:41.761106 1013027 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 14:05:41.761274 1013027 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20321-1005652/.minikube/cache/linux/amd64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-223205 host does not exist
	  To start a cluster, run: "minikube start -p download-only-223205"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-223205
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 14:05:43.061708 1012816 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-105715 --alsologtostderr --binary-mirror http://127.0.0.1:46267 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-105715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-105715
--- PASS: TestBinaryMirror (0.62s)
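The binary-mirror test points the kubectl download at a local HTTP endpoint instead of dl.k8s.io. A hedged sketch of the same flag outside the test harness; the profile name and mirror URL here are placeholders, since the port in the run above was picked by the test itself:

	# Hypothetical profile name and mirror URL; only --download-only and --binary-mirror
	# mirror the invocation recorded above.
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --alsologtostderr --binary-mirror http://127.0.0.1:8080 --driver=kvm2 --container-runtime=crio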

                                                
                                    
x
+
TestOffline (59.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-845871 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-845871 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (57.746584063s)
helpers_test.go:175: Cleaning up "offline-crio-845871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-845871
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-845871: (1.414476876s)
--- PASS: TestOffline (59.16s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-097644
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-097644: exit status 85 (53.130488ms)

                                                
                                                
-- stdout --
	* Profile "addons-097644" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-097644"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-097644
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-097644: exit status 85 (55.805242ms)

                                                
                                                
-- stdout --
	* Profile "addons-097644" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-097644"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (200.01s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-097644 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-097644 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m20.00688613s)
--- PASS: TestAddons/Setup (200.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (1.95s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-097644 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-097644 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-097644 get secret gcp-auth -n new-namespace: exit status 1 (80.758471ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-097644 logs -l app=gcp-auth -n gcp-auth
I0127 14:09:04.293661 1012816 retry.go:31] will retry after 1.656098501s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/01/27 14:09:03 GCP Auth Webhook started!
	2025/01/27 14:09:04 Ready to marshal response ...
	2025/01/27 14:09:04 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-097644 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.95s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.54s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-097644 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-097644 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d0467110-2a34-4ee9-a43d-ff359ed55ac8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d0467110-2a34-4ee9-a43d-ff359ed55ac8] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004272441s
addons_test.go:633: (dbg) Run:  kubectl --context addons-097644 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-097644 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-097644 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.54s)

                                                
                                    
x
+
TestAddons/parallel/Registry (67.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.205638ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-gs69t" [56ae8219-917b-43a3-8b3a-9965b018d7ae] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003531047s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-68qft" [fcd36f1c-2ee6-49df-985c-78afd0b91e4b] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003864205s
addons_test.go:331: (dbg) Run:  kubectl --context addons-097644 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-097644 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-097644 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (56.365026695s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 ip
2025/01/27 14:10:29 [DEBUG] GET http://192.168.39.228:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (67.17s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-twv72" [630bd632-3bc4-48d7-82d7-3faa27b88b0c] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004438436s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 addons disable inspektor-gadget --alsologtostderr -v=1: (5.788987803s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.2s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.924546ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-dr2kc" [d5f1b090-54ae-4efb-ade0-56f8442d821c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004010893s
addons_test.go:402: (dbg) Run:  kubectl --context addons-097644 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 addons disable metrics-server --alsologtostderr -v=1: (1.110540696s)
--- PASS: TestAddons/parallel/MetricsServer (7.20s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (38.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-097644 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-5gsf4" [30c05f05-0975-45e9-9246-00bcb39ebcd0] Pending
helpers_test.go:344: "headlamp-69d78d796f-5gsf4" [30c05f05-0975-45e9-9246-00bcb39ebcd0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-5gsf4" [30c05f05-0975-45e9-9246-00bcb39ebcd0] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 32.00450316s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 addons disable headlamp --alsologtostderr -v=1: (5.956217512s)
--- PASS: TestAddons/parallel/Headlamp (38.94s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-6nqlx" [1603ff50-0482-4e28-b501-555935fc91c3] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003618457s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bs6d4" [157addb8-6c2f-41d6-9d57-8ff984241b50] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00384284s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-8w9x6" [fb942ae3-765e-4fd7-b4d7-69ae263a28e3] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004971149s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-097644 addons disable yakd --alsologtostderr -v=1: (5.774661542s)
--- PASS: TestAddons/parallel/Yakd (11.78s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-097644
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-097644: (1m30.965642227s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-097644
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-097644
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-097644
--- PASS: TestAddons/StoppedEnableDisable (91.26s)

                                                
                                    
x
+
TestCertOptions (88.14s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-612887 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-612887 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m26.662195587s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-612887 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-612887 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-612887 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-612887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-612887
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-612887: (1.012388082s)
--- PASS: TestCertOptions (88.14s)
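Editor's note: the same check the test performs can be run interactively using only the commands shown above (profile name is a placeholder); the openssl output is what confirms the extra SANs/IPs and the non-default port 8555 in the generated apiserver certificate.

    out/minikube-linux-amd64 start -p <profile> --memory=2048 \
      --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p <profile> ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"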

                                                
                                    
x
+
TestCertExpiration (328.73s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-445777 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-445777 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m0.581155829s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-445777 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0127 15:29:06.239033 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-445777 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m27.253381986s)
helpers_test.go:175: Cleaning up "cert-expiration-445777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-445777
--- PASS: TestCertExpiration (328.73s)
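Editor's note: the flow above boils down to creating a cluster with short-lived certificates and later restarting it with a longer expiry; a rough manual equivalent, using only the flags from the log and a placeholder profile name, is:

    out/minikube-linux-amd64 start -p <profile> --memory=2048 \
      --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # wait for the 3m certificates to lapse, then restart with a one-year expiry
    out/minikube-linux-amd64 start -p <profile> --memory=2048 \
      --cert-expiration=8760h --driver=kvm2 --container-runtime=crio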

                                                
                                    
x
+
TestForceSystemdFlag (49.01s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-937953 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0127 15:24:06.238925 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-937953 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.92125749s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-937953 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-937953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-937953
--- PASS: TestForceSystemdFlag (49.01s)
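Editor's note: the assertion here is that --force-systemd is reflected in CRI-O's drop-in configuration; a manual spot check along the same lines (profile name is a placeholder, and inspecting the cgroup manager setting in that file is the intent inferred from the test) would be:

    out/minikube-linux-amd64 start -p <profile> --memory=2048 \
      --force-systemd --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p <profile> ssh "cat /etc/crio/crio.conf.d/02-crio.conf"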

                                                
                                    
x
+
TestForceSystemdEnv (45.33s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-766957 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-766957 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.510839127s)
helpers_test.go:175: Cleaning up "force-systemd-env-766957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-766957
--- PASS: TestForceSystemdEnv (45.33s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.42s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 15:24:39.782937 1012816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 15:24:39.783124 1012816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 15:24:39.818149 1012816 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 15:24:39.818568 1012816 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 15:24:39.818652 1012816 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2976578892/001/docker-machine-driver-kvm2
I0127 15:24:40.073553 1012816 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2976578892/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc0005f4c30 gz:0xc0005f4c38 tar:0xc0005f4ba0 tar.bz2:0xc0005f4bb0 tar.gz:0xc0005f4bc0 tar.xz:0xc0005f4bf0 tar.zst:0xc0005f4c20 tbz2:0xc0005f4bb0 tgz:0xc0005f4bc0 txz:0xc0005f4bf0 tzst:0xc0005f4c20 xz:0xc0005f4c50 zip:0xc0005f4c60 zst:0xc0005f4c58] Getters:map[file:0xc0008df2d0 http:0xc000bb6190 https:0xc000bb61e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 15:24:40.073610 1012816 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2976578892/001/docker-machine-driver-kvm2
I0127 15:24:41.708438 1012816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 15:24:41.708530 1012816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 15:24:41.739644 1012816 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 15:24:41.739679 1012816 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 15:24:41.739746 1012816 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 15:24:41.739773 1012816 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2976578892/002/docker-machine-driver-kvm2
I0127 15:24:41.902565 1012816 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2976578892/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc0005f4c30 gz:0xc0005f4c38 tar:0xc0005f4ba0 tar.bz2:0xc0005f4bb0 tar.gz:0xc0005f4bc0 tar.xz:0xc0005f4bf0 tar.zst:0xc0005f4c20 tbz2:0xc0005f4bb0 tgz:0xc0005f4bc0 txz:0xc0005f4bf0 tzst:0xc0005f4c20 xz:0xc0005f4c50 zip:0xc0005f4c60 zst:0xc0005f4c58] Getters:map[file:0xc001bce750 http:0xc000bb7720 https:0xc000bb7770] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 15:24:41.902613 1012816 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2976578892/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.42s)
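Editor's note: the two download attempts logged above show the fallback order: the arch-suffixed driver URL is tried first and, on a 404 for its checksum file, the un-suffixed "common" URL is used instead. A rough illustration of that order with plain curl (omitting the checksum verification the real download path performs) is:

    curl -fLo docker-machine-driver-kvm2 \
      https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64 \
    || curl -fLo docker-machine-driver-kvm2 \
      https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2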

                                                
                                    
x
+
TestErrorSpam/setup (43.6s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-894492 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-894492 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-894492 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-894492 --driver=kvm2  --container-runtime=crio: (43.597909181s)
--- PASS: TestErrorSpam/setup (43.60s)

                                                
                                    
x
+
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
x
+
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
x
+
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
x
+
TestErrorSpam/stop (5.2s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 stop: (2.328875326s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 stop: (1.654040604s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-894492 --log_dir /tmp/nospam-894492 stop: (1.213018602s)
--- PASS: TestErrorSpam/stop (5.20s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20321-1005652/.minikube/files/etc/test/nested/copy/1012816/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (53.48s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-354053 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-354053 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (53.475272774s)
--- PASS: TestFunctional/serial/StartWithProxy (53.48s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (41.5s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 14:21:44.936666 1012816 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-354053 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-354053 --alsologtostderr -v=8: (41.502334493s)
functional_test.go:663: soft start took 41.503243292s for "functional-354053" cluster.
I0127 14:22:26.439466 1012816 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (41.50s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-354053 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 cache add registry.k8s.io/pause:3.1: (1.143987979s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 cache add registry.k8s.io/pause:3.3: (1.343088934s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 cache add registry.k8s.io/pause:latest: (1.172717876s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.66s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-354053 /tmp/TestFunctionalserialCacheCmdcacheadd_local2066984751/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cache add minikube-local-cache-test:functional-354053
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 cache add minikube-local-cache-test:functional-354053: (1.157156039s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cache delete minikube-local-cache-test:functional-354053
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-354053
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (227.103494ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 cache reload: (1.045142948s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)
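Editor's note: the reload sequence above can be reproduced interactively; a minimal sketch using only the subcommands already exercised (profile name is a placeholder, and the image is assumed to have been added to minikube's cache beforehand, as in the add_remote step above):

    out/minikube-linux-amd64 -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    out/minikube-linux-amd64 -p <profile> cache reload
    out/minikube-linux-amd64 -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again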

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 kubectl -- --context functional-354053 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-354053 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-354053 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-354053 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.119158076s)
functional_test.go:761: restart took 35.11926751s for "functional-354053" cluster.
I0127 14:23:09.272721 1012816 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (35.12s)
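Editor's note: the restart above shows how per-component flags are threaded through --extra-config; a rough manual equivalent (profile name is a placeholder) is:

    out/minikube-linux-amd64 start -p <profile> \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all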

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-354053 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 logs: (1.4831446s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 logs --file /tmp/TestFunctionalserialLogsFileCmd1225867474/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 logs --file /tmp/TestFunctionalserialLogsFileCmd1225867474/001/logs.txt: (1.501957036s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.94s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-354053 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-354053
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-354053: exit status 115 (292.43721ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.247:32639 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-354053 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 config get cpus: exit status 14 (64.992367ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 config get cpus: exit status 14 (53.069181ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (98.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-354053 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-354053 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1025815: os: process already finished
E0127 14:25:28.176313 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (98.29s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-354053 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-354053 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.357417ms)

                                                
                                                
-- stdout --
	* [functional-354053] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:23:27.750540 1025315 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:23:27.750678 1025315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:23:27.750693 1025315 out.go:358] Setting ErrFile to fd 2...
	I0127 14:23:27.750700 1025315 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:23:27.751200 1025315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:23:27.751958 1025315 out.go:352] Setting JSON to false
	I0127 14:23:27.753300 1025315 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":18355,"bootTime":1737969453,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:23:27.753448 1025315 start.go:139] virtualization: kvm guest
	I0127 14:23:27.755715 1025315 out.go:177] * [functional-354053] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:23:27.757127 1025315 notify.go:220] Checking for updates...
	I0127 14:23:27.757147 1025315 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:23:27.758493 1025315 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:23:27.759910 1025315 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:23:27.761462 1025315 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:23:27.762722 1025315 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:23:27.763961 1025315 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:23:27.765658 1025315 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:23:27.766120 1025315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:23:27.766167 1025315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:23:27.782077 1025315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I0127 14:23:27.782655 1025315 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:23:27.783283 1025315 main.go:141] libmachine: Using API Version  1
	I0127 14:23:27.783312 1025315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:23:27.783663 1025315 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:23:27.783803 1025315 main.go:141] libmachine: (functional-354053) Calling .DriverName
	I0127 14:23:27.784086 1025315 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:23:27.784389 1025315 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:23:27.784423 1025315 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:23:27.800067 1025315 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38657
	I0127 14:23:27.800562 1025315 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:23:27.801167 1025315 main.go:141] libmachine: Using API Version  1
	I0127 14:23:27.801192 1025315 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:23:27.801528 1025315 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:23:27.801721 1025315 main.go:141] libmachine: (functional-354053) Calling .DriverName
	I0127 14:23:27.835292 1025315 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:23:27.836631 1025315 start.go:297] selected driver: kvm2
	I0127 14:23:27.836653 1025315 start.go:901] validating driver "kvm2" against &{Name:functional-354053 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-354053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:23:27.836760 1025315 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:23:27.839159 1025315 out.go:201] 
	W0127 14:23:27.840496 1025315 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 14:23:27.841858 1025315 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-354053 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-354053 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-354053 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (168.474959ms)

                                                
                                                
-- stdout --
	* [functional-354053] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:23:29.389880 1025700 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:23:29.390074 1025700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:23:29.390091 1025700 out.go:358] Setting ErrFile to fd 2...
	I0127 14:23:29.390098 1025700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:23:29.390528 1025700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:23:29.391414 1025700 out.go:352] Setting JSON to false
	I0127 14:23:29.392962 1025700 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":18356,"bootTime":1737969453,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:23:29.393100 1025700 start.go:139] virtualization: kvm guest
	I0127 14:23:29.395451 1025700 out.go:177] * [functional-354053] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 14:23:29.396819 1025700 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:23:29.396840 1025700 notify.go:220] Checking for updates...
	I0127 14:23:29.399233 1025700 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:23:29.400589 1025700 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 14:23:29.401838 1025700 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 14:23:29.403108 1025700 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:23:29.404395 1025700 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:23:29.406227 1025700 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:23:29.406818 1025700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:23:29.406888 1025700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:23:29.423313 1025700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37767
	I0127 14:23:29.423928 1025700 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:23:29.424678 1025700 main.go:141] libmachine: Using API Version  1
	I0127 14:23:29.424706 1025700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:23:29.425141 1025700 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:23:29.425319 1025700 main.go:141] libmachine: (functional-354053) Calling .DriverName
	I0127 14:23:29.425574 1025700 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:23:29.425916 1025700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:23:29.425955 1025700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:23:29.443236 1025700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0127 14:23:29.443854 1025700 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:23:29.444488 1025700 main.go:141] libmachine: Using API Version  1
	I0127 14:23:29.444505 1025700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:23:29.444866 1025700 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:23:29.445060 1025700 main.go:141] libmachine: (functional-354053) Calling .DriverName
	I0127 14:23:29.483470 1025700 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 14:23:29.484803 1025700 start.go:297] selected driver: kvm2
	I0127 14:23:29.484822 1025700 start.go:901] validating driver "kvm2" against &{Name:functional-354053 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-354053 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:23:29.484970 1025700 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:23:29.486981 1025700 out.go:201] 
	W0127 14:23:29.488336 1025700 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 14:23:29.489548 1025700 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
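Note: the dry-run above fails by design (250MB requested memory), and the point of this test is that the failure text comes from the French message catalogue rather than the English one seen in TestFunctional/parallel/DryRun. Below is a minimal standalone sketch of the same check; it assumes the binary path and profile name from this report, and it assumes minikube selects the catalogue from the LC_ALL environment variable, which is an assumption about the mechanism rather than something shown in this log.

// Minimal sketch (not part of the test suite): re-run the dry-run with a French
// locale and look for the localized RSRC_INSUFFICIENT_REQ_MEMORY message.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-354053",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	// Assumption: LC_ALL drives minikube's translation lookup.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit err: %v\n", err) // expected: non-zero exit (status 23 in the log above)
	if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized error message found")
	}
}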

                                                
                                    
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
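For reference, the three status invocations above can be replayed outside the test harness. A minimal sketch, assuming the binary path and profile name from this report; the "kublet" key in the Go template is copied verbatim from the logged command.

// Minimal sketch: plain, templated, and JSON forms of `minikube status`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := [][]string{
		{"-p", "functional-354053", "status"},
		{"-p", "functional-354053", "status", "-f",
			"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"},
		{"-p", "functional-354053", "status", "-o", "json"},
	}
	for _, a := range args {
		out, err := exec.Command("out/minikube-linux-amd64", a...).CombinedOutput()
		fmt.Printf("%v -> err=%v\n%s\n", a, err, out)
	}
}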

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-354053 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-354053 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-krgns" [545ea35f-d082-4ec6-9395-474eb765ae58] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-krgns" [545ea35f-d082-4ec6-9395-474eb765ae58] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.006059463s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.247:30274
functional_test.go:1675: http://192.168.39.247:30274: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-krgns

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.247:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.247:30274
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.55s)
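The connectivity check above boils down to resolving the NodePort URL and issuing a GET against it. A minimal sketch, under the assumption that the hello-node-connect deployment and service created earlier in this test still exist; nothing here is cleaned up afterwards.

// Minimal sketch: resolve the service URL with `minikube service --url`, then GET it.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-354053",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.247:30274 in this run
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d\n%s\n", resp.StatusCode, body) // echoserver prints Hostname, request headers, etc.
}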

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh -n functional-354053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cp functional-354053:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4292173168/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh -n functional-354053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh -n functional-354053 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)
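The copy test above is a round-trip: push a local file into the guest with `minikube cp`, then read it back over `minikube ssh` and compare. A minimal sketch mirroring the logged cp and ssh invocations; the comparison logic is illustrative, not the test's own assertion.

// Minimal sketch: cp a file into the VM and read it back via ssh.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %v\n%s", args, err, out))
	}
	return out
}

func main() {
	src := "testdata/cp-test.txt"
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	run("-p", "functional-354053", "cp", src, "/home/docker/cp-test.txt")
	got := run("-p", "functional-354053", "ssh", "-n", "functional-354053",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}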

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1012816/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo cat /etc/test/nested/copy/1012816/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
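The file-sync check reads a file that minikube synced into the guest and compares it with a known marker string. A minimal sketch; the /etc/test/nested/copy/1012816/hosts path and the expected text are taken from the log lines above and are specific to this run.

// Minimal sketch: read the synced file from inside the VM and compare it to the marker text.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-354053",
		"ssh", "sudo cat /etc/test/nested/copy/1012816/hosts").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	fmt.Println("synced content:", got)
	fmt.Println("matches expected marker:", got == "Test file for checking file sync process")
}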

                                                
                                    
TestFunctional/parallel/CertSync (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1012816.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo cat /etc/ssl/certs/1012816.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1012816.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo cat /usr/share/ca-certificates/1012816.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/10128162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo cat /etc/ssl/certs/10128162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/10128162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo cat /usr/share/ca-certificates/10128162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)
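The cert-sync check simply probes a fixed set of certificate paths inside the guest. A minimal sketch over the same paths; the 1012816/10128162 path components are specific to this run.

// Minimal sketch: check that each synced certificate path is readable in the VM.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/1012816.pem",
		"/usr/share/ca-certificates/1012816.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/10128162.pem",
		"/usr/share/ca-certificates/10128162.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-354053",
			"ssh", "sudo cat "+p).Run()
		fmt.Printf("%-45s present=%v\n", p, err == nil)
	}
}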

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-354053 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
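The node-labels check shells out to kubectl with a go-template that prints every label key on the first node. A minimal sketch of the same invocation; which labels the test actually asserts on is not visible in this log, so the sketch only prints them.

// Minimal sketch: list the label keys of the first node via kubectl's go-template output.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-354053", "get", "nodes",
		"--output=go-template",
		"--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v\n%s", err, out))
	}
	fmt.Println(string(out)) // e.g. kubernetes.io/hostname kubernetes.io/os minikube.k8s.io/name ...
}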

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 ssh "sudo systemctl is-active docker": exit status 1 (243.202529ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 ssh "sudo systemctl is-active containerd": exit status 1 (234.357806ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
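With crio as the configured runtime, docker and containerd are expected to be inactive, so `systemctl is-active` prints "inactive" and exits non-zero (surfaced above as ssh exit status 1 wrapping systemd's status 3). A minimal sketch of the same probe, assuming the binary path and profile name from this report.

// Minimal sketch: query the state of the non-active runtimes over `minikube ssh`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-354053",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: state=%q err=%v\n", unit, state, err) // expected: state="inactive", non-nil err
	}
}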

                                                
                                    
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-354053 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-354053
localhost/kicbase/echo-server:functional-354053
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-354053 image ls --format short --alsologtostderr:
I0127 14:24:03.029558 1026477 out.go:345] Setting OutFile to fd 1 ...
I0127 14:24:03.029993 1026477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:03.030662 1026477 out.go:358] Setting ErrFile to fd 2...
I0127 14:24:03.030737 1026477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:03.031278 1026477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
I0127 14:24:03.031905 1026477 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:03.032022 1026477 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:03.032373 1026477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:03.032421 1026477 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:03.048015 1026477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36397
I0127 14:24:03.048568 1026477 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:03.049229 1026477 main.go:141] libmachine: Using API Version  1
I0127 14:24:03.049260 1026477 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:03.049576 1026477 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:03.049774 1026477 main.go:141] libmachine: (functional-354053) Calling .GetState
I0127 14:24:03.051598 1026477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:03.051647 1026477 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:03.066962 1026477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
I0127 14:24:03.067377 1026477 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:03.067865 1026477 main.go:141] libmachine: Using API Version  1
I0127 14:24:03.067893 1026477 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:03.068299 1026477 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:03.068545 1026477 main.go:141] libmachine: (functional-354053) Calling .DriverName
I0127 14:24:03.068775 1026477 ssh_runner.go:195] Run: systemctl --version
I0127 14:24:03.068810 1026477 main.go:141] libmachine: (functional-354053) Calling .GetSSHHostname
I0127 14:24:03.071410 1026477 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:03.071825 1026477 main.go:141] libmachine: (functional-354053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:7b:60", ip: ""} in network mk-functional-354053: {Iface:virbr1 ExpiryTime:2025-01-27 15:21:06 +0000 UTC Type:0 Mac:52:54:00:ee:7b:60 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-354053 Clientid:01:52:54:00:ee:7b:60}
I0127 14:24:03.071855 1026477 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined IP address 192.168.39.247 and MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:03.071972 1026477 main.go:141] libmachine: (functional-354053) Calling .GetSSHPort
I0127 14:24:03.072156 1026477 main.go:141] libmachine: (functional-354053) Calling .GetSSHKeyPath
I0127 14:24:03.072308 1026477 main.go:141] libmachine: (functional-354053) Calling .GetSSHUsername
I0127 14:24:03.072430 1026477 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/functional-354053/id_rsa Username:docker}
I0127 14:24:03.152087 1026477 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:24:03.189649 1026477 main.go:141] libmachine: Making call to close driver server
I0127 14:24:03.189669 1026477 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:03.190085 1026477 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:03.190112 1026477 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:03.190120 1026477 main.go:141] libmachine: Making call to close driver server
I0127 14:24:03.190128 1026477 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:03.190149 1026477 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
I0127 14:24:03.190409 1026477 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
I0127 14:24:03.190412 1026477 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:03.190439 1026477 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
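The short listing prints one image reference per line. A minimal sketch that re-runs it and looks for a few of the references shown above; the chosen references are just examples taken from this run's output.

// Minimal sketch: verify selected image references appear in the short listing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-354053",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		panic(err)
	}
	listed := strings.Fields(string(out)) // one reference per line
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.32.1",
		"registry.k8s.io/etcd:3.5.16-0",
		"localhost/kicbase/echo-server:functional-354053",
	} {
		found := false
		for _, have := range listed {
			if have == want {
				found = true
				break
			}
		}
		fmt.Printf("%-50s found=%v\n", want, found)
	}
}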

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls --format table --alsologtostderr
E0127 14:24:06.238249 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:06.244696 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:06.256157 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:06.277898 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:06.319416 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-354053 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-354053  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-354053  | 44fd39deecd68 | 1.47MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/minikube-local-cache-test     | functional-354053  | 20b1693f4c02d | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-354053 image ls --format table --alsologtostderr:
I0127 14:24:06.135253 1026645 out.go:345] Setting OutFile to fd 1 ...
I0127 14:24:06.135384 1026645 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:06.135394 1026645 out.go:358] Setting ErrFile to fd 2...
I0127 14:24:06.135398 1026645 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:06.135588 1026645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
I0127 14:24:06.136222 1026645 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:06.136321 1026645 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:06.136708 1026645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:06.136767 1026645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:06.152787 1026645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
I0127 14:24:06.153390 1026645 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:06.154023 1026645 main.go:141] libmachine: Using API Version  1
I0127 14:24:06.154051 1026645 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:06.154391 1026645 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:06.154596 1026645 main.go:141] libmachine: (functional-354053) Calling .GetState
I0127 14:24:06.156885 1026645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:06.156935 1026645 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:06.172948 1026645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
I0127 14:24:06.173449 1026645 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:06.173925 1026645 main.go:141] libmachine: Using API Version  1
I0127 14:24:06.173943 1026645 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:06.174271 1026645 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:06.174483 1026645 main.go:141] libmachine: (functional-354053) Calling .DriverName
I0127 14:24:06.174687 1026645 ssh_runner.go:195] Run: systemctl --version
I0127 14:24:06.174713 1026645 main.go:141] libmachine: (functional-354053) Calling .GetSSHHostname
I0127 14:24:06.177749 1026645 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:06.178171 1026645 main.go:141] libmachine: (functional-354053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:7b:60", ip: ""} in network mk-functional-354053: {Iface:virbr1 ExpiryTime:2025-01-27 15:21:06 +0000 UTC Type:0 Mac:52:54:00:ee:7b:60 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-354053 Clientid:01:52:54:00:ee:7b:60}
I0127 14:24:06.178204 1026645 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined IP address 192.168.39.247 and MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:06.178325 1026645 main.go:141] libmachine: (functional-354053) Calling .GetSSHPort
I0127 14:24:06.178509 1026645 main.go:141] libmachine: (functional-354053) Calling .GetSSHKeyPath
I0127 14:24:06.178691 1026645 main.go:141] libmachine: (functional-354053) Calling .GetSSHUsername
I0127 14:24:06.178845 1026645 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/functional-354053/id_rsa Username:docker}
I0127 14:24:06.259558 1026645 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:24:06.309987 1026645 main.go:141] libmachine: Making call to close driver server
I0127 14:24:06.310005 1026645 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:06.310327 1026645 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:06.310367 1026645 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:06.310343 1026645 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
I0127 14:24:06.310388 1026645 main.go:141] libmachine: Making call to close driver server
I0127 14:24:06.310401 1026645 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:06.310640 1026645 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:06.310671 1026645 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:06.310681 1026645 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
E0127 14:24:06.400912 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:06.562535 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:06.883794 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:07.525239 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:08.807288 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:11.368649 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:16.490235 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:26.732318 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:24:47.214229 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
2025/01/27 14:25:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-354053 image ls --format json --alsologtostderr:
[{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8
s.io/echoserver:1.8"],"size":"97846543"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"
20b1693f4c02d9c6ff9a2bdd75116b8e2026bac1e98ed2560a95b44f20b4e92e","repoDigests":["localhost/minikube-local-cache-test@sha256:e21fbb4bb80f32d0982a9d46738e35f04b66643a83ae9daf4da6a65fcaea4925"],"repoTags":["localhost/minikube-local-cache-test:functional-354053"],"size":"3330"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"
95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"5dc4ef010e526e942140060d03deaa6a6e1c03ced8b482b53b4a78f50e1e3997","repoDigests":["docker.io/library/31dcff464907214d41cecc9ac3445dc95ae85fd9a485f0
edeca3c76281b964e6-tmp@sha256:2692ce7569cf7d81a1a61e867ae757f2ec48dbc205ef47f41d266d05a9e7bc68"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-354053"],"size":"4943877"},{"id":"44fd39deecd68cc2d35b57f422f950ae673c09569904221e888fdd9f8f0d752b","repoDigests":["localhost/my-image@sha256:820fc157f1ad80b2bdc368b17933ea414beb531b221f98ddbf5824d1a8c1fb03"],"repoTags":["localhost/my-image:functional-354053"],"size":"1468600"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-354053 image ls --format json --alsologtostderr:
I0127 14:24:05.917110 1026621 out.go:345] Setting OutFile to fd 1 ...
I0127 14:24:05.917214 1026621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:05.917222 1026621 out.go:358] Setting ErrFile to fd 2...
I0127 14:24:05.917226 1026621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:05.917443 1026621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
I0127 14:24:05.918069 1026621 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:05.918176 1026621 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:05.918536 1026621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:05.918584 1026621 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:05.934479 1026621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
I0127 14:24:05.934981 1026621 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:05.935568 1026621 main.go:141] libmachine: Using API Version  1
I0127 14:24:05.935598 1026621 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:05.935947 1026621 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:05.936174 1026621 main.go:141] libmachine: (functional-354053) Calling .GetState
I0127 14:24:05.938053 1026621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:05.938108 1026621 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:05.953125 1026621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37221
I0127 14:24:05.953624 1026621 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:05.954212 1026621 main.go:141] libmachine: Using API Version  1
I0127 14:24:05.954248 1026621 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:05.954591 1026621 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:05.954806 1026621 main.go:141] libmachine: (functional-354053) Calling .DriverName
I0127 14:24:05.955022 1026621 ssh_runner.go:195] Run: systemctl --version
I0127 14:24:05.955052 1026621 main.go:141] libmachine: (functional-354053) Calling .GetSSHHostname
I0127 14:24:05.957840 1026621 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:05.958287 1026621 main.go:141] libmachine: (functional-354053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:7b:60", ip: ""} in network mk-functional-354053: {Iface:virbr1 ExpiryTime:2025-01-27 15:21:06 +0000 UTC Type:0 Mac:52:54:00:ee:7b:60 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-354053 Clientid:01:52:54:00:ee:7b:60}
I0127 14:24:05.958317 1026621 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined IP address 192.168.39.247 and MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:05.958460 1026621 main.go:141] libmachine: (functional-354053) Calling .GetSSHPort
I0127 14:24:05.958639 1026621 main.go:141] libmachine: (functional-354053) Calling .GetSSHKeyPath
I0127 14:24:05.958884 1026621 main.go:141] libmachine: (functional-354053) Calling .GetSSHUsername
I0127 14:24:05.959095 1026621 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/functional-354053/id_rsa Username:docker}
I0127 14:24:06.039689 1026621 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:24:06.077193 1026621 main.go:141] libmachine: Making call to close driver server
I0127 14:24:06.077212 1026621 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:06.077553 1026621 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:06.077565 1026621 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
I0127 14:24:06.077584 1026621 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:06.077615 1026621 main.go:141] libmachine: Making call to close driver server
I0127 14:24:06.077628 1026621 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:06.077878 1026621 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:06.077896 1026621 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:06.077920 1026621 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
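The JSON listing is an array of objects with id, repoDigests, repoTags and size fields, as visible in the stdout above. A minimal sketch that re-runs the command and decodes it; the struct only mirrors the fields seen in this log, so any additional fields the command may emit would simply be ignored by encoding/json.

// Minimal sketch: decode `image ls --format json` and print tag plus size per image.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string, per the output above
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-354053",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %s bytes\n", tag, img.Size)
	}
}

Run against the same profile, this prints roughly the same information as the table in the ImageListTable section above.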

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-354053 image ls --format yaml --alsologtostderr:
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-354053
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 20b1693f4c02d9c6ff9a2bdd75116b8e2026bac1e98ed2560a95b44f20b4e92e
repoDigests:
- localhost/minikube-local-cache-test@sha256:e21fbb4bb80f32d0982a9d46738e35f04b66643a83ae9daf4da6a65fcaea4925
repoTags:
- localhost/minikube-local-cache-test:functional-354053
size: "3330"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-354053 image ls --format yaml --alsologtostderr:
I0127 14:24:03.244005 1026501 out.go:345] Setting OutFile to fd 1 ...
I0127 14:24:03.244124 1026501 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:03.244133 1026501 out.go:358] Setting ErrFile to fd 2...
I0127 14:24:03.244137 1026501 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:03.244948 1026501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
I0127 14:24:03.246202 1026501 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:03.246324 1026501 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:03.246661 1026501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:03.246708 1026501 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:03.262107 1026501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
I0127 14:24:03.262677 1026501 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:03.263380 1026501 main.go:141] libmachine: Using API Version  1
I0127 14:24:03.263408 1026501 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:03.263750 1026501 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:03.263950 1026501 main.go:141] libmachine: (functional-354053) Calling .GetState
I0127 14:24:03.265763 1026501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:03.265804 1026501 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:03.281949 1026501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45629
I0127 14:24:03.282393 1026501 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:03.282938 1026501 main.go:141] libmachine: Using API Version  1
I0127 14:24:03.282968 1026501 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:03.283315 1026501 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:03.283520 1026501 main.go:141] libmachine: (functional-354053) Calling .DriverName
I0127 14:24:03.283698 1026501 ssh_runner.go:195] Run: systemctl --version
I0127 14:24:03.283723 1026501 main.go:141] libmachine: (functional-354053) Calling .GetSSHHostname
I0127 14:24:03.286248 1026501 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:03.286661 1026501 main.go:141] libmachine: (functional-354053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:7b:60", ip: ""} in network mk-functional-354053: {Iface:virbr1 ExpiryTime:2025-01-27 15:21:06 +0000 UTC Type:0 Mac:52:54:00:ee:7b:60 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-354053 Clientid:01:52:54:00:ee:7b:60}
I0127 14:24:03.286693 1026501 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined IP address 192.168.39.247 and MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:03.286865 1026501 main.go:141] libmachine: (functional-354053) Calling .GetSSHPort
I0127 14:24:03.287048 1026501 main.go:141] libmachine: (functional-354053) Calling .GetSSHKeyPath
I0127 14:24:03.287202 1026501 main.go:141] libmachine: (functional-354053) Calling .GetSSHUsername
I0127 14:24:03.287348 1026501 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/functional-354053/id_rsa Username:docker}
I0127 14:24:03.367904 1026501 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:24:03.406629 1026501 main.go:141] libmachine: Making call to close driver server
I0127 14:24:03.406648 1026501 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:03.407044 1026501 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:03.407066 1026501 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:03.407042 1026501 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
I0127 14:24:03.407085 1026501 main.go:141] libmachine: Making call to close driver server
I0127 14:24:03.407095 1026501 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:03.407342 1026501 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:03.407360 1026501 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:03.407379 1026501 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
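
The YAML listing above lends itself to programmatic checks. Below is a minimal Go sketch of decoding it, assuming the gopkg.in/yaml.v3 package and reusing the field names (id, repoDigests, repoTags, size), profile name, and binary path shown in the log; it is an illustration, not the test's own code.

package main

import (
	"fmt"
	"os"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// imageInfo mirrors the fields printed by `image ls --format yaml` above.
type imageInfo struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Run the same listing command the test runs and capture its stdout.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-354053",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var images []imageInfo
	if err := yaml.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}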

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 ssh pgrep buildkitd: exit status 1 (198.100008ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image build -t localhost/my-image:functional-354053 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 image build -t localhost/my-image:functional-354053 testdata/build --alsologtostderr: (2.032352625s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-354053 image build -t localhost/my-image:functional-354053 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5dc4ef010e5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-354053
--> 44fd39deecd
Successfully tagged localhost/my-image:functional-354053
44fd39deecd68cc2d35b57f422f950ae673c09569904221e888fdd9f8f0d752b
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-354053 image build -t localhost/my-image:functional-354053 testdata/build --alsologtostderr:
I0127 14:24:03.661989 1026557 out.go:345] Setting OutFile to fd 1 ...
I0127 14:24:03.662142 1026557 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:03.662154 1026557 out.go:358] Setting ErrFile to fd 2...
I0127 14:24:03.662163 1026557 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:24:03.662383 1026557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
I0127 14:24:03.663049 1026557 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:03.663690 1026557 config.go:182] Loaded profile config "functional-354053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 14:24:03.664044 1026557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:03.664107 1026557 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:03.680019 1026557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
I0127 14:24:03.680506 1026557 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:03.681153 1026557 main.go:141] libmachine: Using API Version  1
I0127 14:24:03.681185 1026557 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:03.681555 1026557 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:03.681810 1026557 main.go:141] libmachine: (functional-354053) Calling .GetState
I0127 14:24:03.683600 1026557 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 14:24:03.683645 1026557 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:24:03.699061 1026557 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44537
I0127 14:24:03.699537 1026557 main.go:141] libmachine: () Calling .GetVersion
I0127 14:24:03.700047 1026557 main.go:141] libmachine: Using API Version  1
I0127 14:24:03.700071 1026557 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:24:03.700433 1026557 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:24:03.700623 1026557 main.go:141] libmachine: (functional-354053) Calling .DriverName
I0127 14:24:03.700877 1026557 ssh_runner.go:195] Run: systemctl --version
I0127 14:24:03.700907 1026557 main.go:141] libmachine: (functional-354053) Calling .GetSSHHostname
I0127 14:24:03.703911 1026557 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:03.704362 1026557 main.go:141] libmachine: (functional-354053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:7b:60", ip: ""} in network mk-functional-354053: {Iface:virbr1 ExpiryTime:2025-01-27 15:21:06 +0000 UTC Type:0 Mac:52:54:00:ee:7b:60 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-354053 Clientid:01:52:54:00:ee:7b:60}
I0127 14:24:03.704401 1026557 main.go:141] libmachine: (functional-354053) DBG | domain functional-354053 has defined IP address 192.168.39.247 and MAC address 52:54:00:ee:7b:60 in network mk-functional-354053
I0127 14:24:03.704590 1026557 main.go:141] libmachine: (functional-354053) Calling .GetSSHPort
I0127 14:24:03.704795 1026557 main.go:141] libmachine: (functional-354053) Calling .GetSSHKeyPath
I0127 14:24:03.705047 1026557 main.go:141] libmachine: (functional-354053) Calling .GetSSHUsername
I0127 14:24:03.705247 1026557 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/functional-354053/id_rsa Username:docker}
I0127 14:24:03.787876 1026557 build_images.go:161] Building image from path: /tmp/build.2561566373.tar
I0127 14:24:03.787957 1026557 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 14:24:03.799186 1026557 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2561566373.tar
I0127 14:24:03.803961 1026557 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2561566373.tar: stat -c "%s %y" /var/lib/minikube/build/build.2561566373.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2561566373.tar': No such file or directory
I0127 14:24:03.804006 1026557 ssh_runner.go:362] scp /tmp/build.2561566373.tar --> /var/lib/minikube/build/build.2561566373.tar (3072 bytes)
I0127 14:24:03.833464 1026557 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2561566373
I0127 14:24:03.845312 1026557 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2561566373 -xf /var/lib/minikube/build/build.2561566373.tar
I0127 14:24:03.855716 1026557 crio.go:315] Building image: /var/lib/minikube/build/build.2561566373
I0127 14:24:03.855791 1026557 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-354053 /var/lib/minikube/build/build.2561566373 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0127 14:24:05.613495 1026557 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-354053 /var/lib/minikube/build/build.2561566373 --cgroup-manager=cgroupfs: (1.757677052s)
I0127 14:24:05.613602 1026557 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2561566373
I0127 14:24:05.628354 1026557 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2561566373.tar
I0127 14:24:05.637959 1026557 build_images.go:217] Built localhost/my-image:functional-354053 from /tmp/build.2561566373.tar
I0127 14:24:05.637998 1026557 build_images.go:133] succeeded building to: functional-354053
I0127 14:24:05.638004 1026557 build_images.go:134] failed building to: 
I0127 14:24:05.638035 1026557 main.go:141] libmachine: Making call to close driver server
I0127 14:24:05.638050 1026557 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:05.638390 1026557 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:05.638410 1026557 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:05.638417 1026557 main.go:141] libmachine: Making call to close driver server
I0127 14:24:05.638486 1026557 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
I0127 14:24:05.638501 1026557 main.go:141] libmachine: (functional-354053) Calling .Close
I0127 14:24:05.638745 1026557 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:24:05.638761 1026557 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:24:05.638797 1026557 main.go:141] libmachine: (functional-354053) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.45s)
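
Judging from the STEP lines in the stdout above, the testdata/build context builds a trivial image on top of gcr.io/k8s-minikube/busybox, and the test then re-lists images to confirm the tag exists. A small Go sketch of that build-then-verify loop, assuming the same binary path, profile, and tag as in the log; it shells out to the commands shown above rather than reproducing the test's internals.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-354053"
	const tag = "localhost/my-image:" + profile

	// Build the image from the on-disk context, as the test does.
	build := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("build failed: %v\n%s", err, out)
	}

	// Verify the tag now shows up in the image list.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(ls), tag) {
		log.Fatalf("%s not found in image list", tag)
	}
	fmt.Println("image built and listed:", tag)
}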

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (3.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.983926796s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-354053
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.01s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-354053 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-354053 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-bfvhm" [c90a774f-43ce-446b-a2ce-7cbbca5e3a7a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-bfvhm" [c90a774f-43ce-446b-a2ce-7cbbca5e3a7a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003456239s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)
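
The deploy sequence above (create the deployment, expose it as a NodePort, wait for the pod) can be reproduced outside the test harness. A hedged Go sketch using the same kubectl commands that appear in the log; note that the readiness wait here uses `kubectl wait`, whereas the test polls pods with its own helper.

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, echoing its combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	log.Printf("%s", out)
}

func main() {
	ctx := "functional-354053" // kubeconfig context used by the test

	run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	// Block until the pod behind the deployment reports Ready.
	run("kubectl", "--context", ctx, "wait", "--for=condition=ready",
		"pod", "-l", "app=hello-node", "--timeout=600s")
}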

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image load --daemon kicbase/echo-server:functional-354053 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-354053 image load --daemon kicbase/echo-server:functional-354053 --alsologtostderr: (2.210003558s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image load --daemon kicbase/echo-server:functional-354053 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-354053
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image load --daemon kicbase/echo-server:functional-354053 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image save kicbase/echo-server:functional-354053 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image rm kicbase/echo-server:functional-354053 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)
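
ImageSaveToFile and ImageLoadFromFile together exercise a save/remove/load round trip. A compact Go sketch of that round trip, assuming the binary path, profile, and tag from the log and an arbitrary /tmp path for the tarball (the run above uses a workspace path instead).

package main

import (
	"log"
	"os/exec"
	"strings"
)

// mk runs the minikube binary against the functional-354053 profile.
func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-354053"}, args...)...).CombinedOutput()
}

func main() {
	const tag = "kicbase/echo-server:functional-354053"
	const tar = "/tmp/echo-server-save.tar"

	// Save the image to a tarball, remove it from the runtime, then load it back.
	for _, step := range [][]string{
		{"image", "save", tag, tar},
		{"image", "rm", tag},
		{"image", "load", tar},
	} {
		if out, err := mk(step...); err != nil {
			log.Fatalf("%v: %v\n%s", step, err, out)
		}
	}

	// The tag should be listed again after the load.
	out, err := mk("image", "ls")
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(out), tag) {
		log.Fatalf("%s missing after reload", tag)
	}
	log.Println("save/rm/load round trip OK")
}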

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-354053
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 image save --daemon kicbase/echo-server:functional-354053 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-354053
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "270.972636ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "55.26522ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "275.780011ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.89221ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
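
`profile list -o json` (run above) emits a machine-readable listing. A schema-agnostic Go sketch that decodes it, assuming only that the top level is a JSON object, and reports which keys it contains.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Capture the machine-readable profile listing used by the test above.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}

	// Decode without assuming a particular schema and show the top-level keys.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatal(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}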

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (30.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdany-port2100981818/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737987807986625236" to /tmp/TestFunctionalparallelMountCmdany-port2100981818/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737987807986625236" to /tmp/TestFunctionalparallelMountCmdany-port2100981818/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737987807986625236" to /tmp/TestFunctionalparallelMountCmdany-port2100981818/001/test-1737987807986625236
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (228.700851ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 14:23:28.215689 1012816 retry.go:31] will retry after 570.103428ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 14:23 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 14:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 14:23 test-1737987807986625236
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh cat /mount-9p/test-1737987807986625236
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-354053 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [28522223-1c9a-40dc-bf05-aba4db084b30] Pending
helpers_test.go:344: "busybox-mount" [28522223-1c9a-40dc-bf05-aba4db084b30] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [28522223-1c9a-40dc-bf05-aba4db084b30] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [28522223-1c9a-40dc-bf05-aba4db084b30] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 28.003784284s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-354053 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdany-port2100981818/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (30.85s)
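
The mount test retries `findmnt -T /mount-9p | grep 9p` over SSH until the 9p mount becomes visible (one retry after roughly 570ms in the run above). A minimal Go sketch of that poll-with-retry pattern, assuming the binary path and profile name from the log.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh findmnt` until the 9p mount is visible
// or the deadline passes, mirroring the retry seen in the log above.
func waitForMount(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return nil // mount is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("mount %s not visible within %s", mountPoint, timeout)
}

func main() {
	if err := waitForMount("functional-354053", "/mount-9p", 30*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("/mount-9p is mounted")
}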

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 service list -o json
functional_test.go:1494: Took "255.351188ms" to run "out/minikube-linux-amd64 -p functional-354053 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.247:31302
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.247:31302
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
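
Once `service hello-node --url` prints an endpoint (http://192.168.39.247:31302 in this run), it can be exercised directly. A short Go sketch that asks for the URL and issues a single GET, assuming the service exposes one port so the command prints a single line.

package main

import (
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the hello-node service.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-354053",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))

	// Hit the endpoint; the echoserver image replies to any GET with request details.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	log.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}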

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdspecific-port3274950700/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (200.72917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 14:23:59.039142 1012816 retry.go:31] will retry after 736.716606ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdspecific-port3274950700/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 ssh "sudo umount -f /mount-9p": exit status 1 (200.492836ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-354053 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdspecific-port3274950700/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T" /mount1: exit status 1 (231.594001ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 14:24:01.010320 1012816 retry.go:31] will retry after 504.183546ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-354053 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-354053 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-354053 /tmp/TestFunctionalparallelMountCmdVerifyCleanup23558551/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)
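
VerifyCleanup kills the background mount processes with `mount --kill=true` and then confirms the guest no longer sees the 9p mounts. A small Go sketch of the same cleanup check, assuming the binary path and profile from the log.

package main

import (
	"log"
	"os/exec"
)

func main() {
	const profile = "functional-354053"

	// Kill any background mount processes for this profile, as the test does.
	if out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", profile, "--kill=true").CombinedOutput(); err != nil {
		log.Fatalf("mount --kill: %v\n%s", err, out)
	}

	// After cleanup none of the mount points should still be 9p mounts,
	// so findmnt over SSH is expected to fail for each of them.
	for _, mp := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T "+mp+" | grep 9p")
		if err := cmd.Run(); err == nil {
			log.Fatalf("%s is still mounted after cleanup", mp)
		}
		log.Printf("%s cleaned up", mp)
	}
}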

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-354053
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-354053
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-354053
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (232.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-119350 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 14:34:06.238350 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-119350 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m52.238760096s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (232.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-119350 -- rollout status deployment/busybox: (3.051607247s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-cz8th -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-kfrps -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-pljh8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-cz8th -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-kfrps -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-pljh8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-cz8th -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-kfrps -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-pljh8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.28s)
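
The HA DeployApp test resolves cluster DNS names from every busybox pod. A hedged Go sketch of the same check, using the `minikube kubectl -p ha-119350 --` passthrough seen in the log and assuming the only pods in the default namespace are the busybox replicas.

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "ha-119350"

	// List the busybox pods, as the test does, then verify cluster DNS from each.
	out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
			"exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local")
		if err := cmd.Run(); err != nil {
			log.Fatalf("DNS lookup failed from %s: %v", pod, err)
		}
		log.Printf("cluster DNS OK from %s", pod)
	}
}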

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-cz8th -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-cz8th -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-kfrps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-kfrps -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-pljh8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-119350 -- exec busybox-58667487b6-pljh8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
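
PingHostFromPods resolves host.minikube.internal inside a pod and pings the resulting address once. A sketch of the same two steps against a single pod, assuming plain kubectl with the ha-119350 context and one of the pod names listed above.

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	kubectl := func(args ...string) string {
		out, err := exec.Command("kubectl",
			append([]string{"--context", "ha-119350"}, args...)...).Output()
		if err != nil {
			log.Fatal(err)
		}
		return strings.TrimSpace(string(out))
	}

	pod := "busybox-58667487b6-cz8th" // one of the pods listed above

	// Resolve the host's address from inside the pod, using the same pipeline
	// as the test, then ping it once to confirm pod-to-host connectivity.
	hostIP := kubectl("exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	log.Printf("host.minikube.internal resolves to %s", hostIP)
	kubectl("exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	log.Println("pod can reach the host network")
}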

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (55.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-119350 -v=7 --alsologtostderr
E0127 14:38:16.947764 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:16.954214 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:16.965901 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:16.987403 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:17.028920 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:17.110487 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:17.272088 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:17.594089 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:18.235465 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:19.516885 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:22.078884 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:38:27.201155 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-119350 -v=7 --alsologtostderr: (54.321549608s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-119350 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp testdata/cp-test.txt ha-119350:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2486024607/001/cp-test_ha-119350.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350:/home/docker/cp-test.txt ha-119350-m02:/home/docker/cp-test_ha-119350_ha-119350-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m02 "sudo cat /home/docker/cp-test_ha-119350_ha-119350-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350:/home/docker/cp-test.txt ha-119350-m03:/home/docker/cp-test_ha-119350_ha-119350-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m03 "sudo cat /home/docker/cp-test_ha-119350_ha-119350-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350:/home/docker/cp-test.txt ha-119350-m04:/home/docker/cp-test_ha-119350_ha-119350-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m04 "sudo cat /home/docker/cp-test_ha-119350_ha-119350-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp testdata/cp-test.txt ha-119350-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2486024607/001/cp-test_ha-119350-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m02:/home/docker/cp-test.txt ha-119350:/home/docker/cp-test_ha-119350-m02_ha-119350.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350 "sudo cat /home/docker/cp-test_ha-119350-m02_ha-119350.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m02:/home/docker/cp-test.txt ha-119350-m03:/home/docker/cp-test_ha-119350-m02_ha-119350-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m03 "sudo cat /home/docker/cp-test_ha-119350-m02_ha-119350-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m02:/home/docker/cp-test.txt ha-119350-m04:/home/docker/cp-test_ha-119350-m02_ha-119350-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m04 "sudo cat /home/docker/cp-test_ha-119350-m02_ha-119350-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp testdata/cp-test.txt ha-119350-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2486024607/001/cp-test_ha-119350-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m03:/home/docker/cp-test.txt ha-119350:/home/docker/cp-test_ha-119350-m03_ha-119350.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m03 "sudo cat /home/docker/cp-test.txt"
E0127 14:38:37.443218 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350 "sudo cat /home/docker/cp-test_ha-119350-m03_ha-119350.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m03:/home/docker/cp-test.txt ha-119350-m02:/home/docker/cp-test_ha-119350-m03_ha-119350-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m02 "sudo cat /home/docker/cp-test_ha-119350-m03_ha-119350-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m03:/home/docker/cp-test.txt ha-119350-m04:/home/docker/cp-test_ha-119350-m03_ha-119350-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m04 "sudo cat /home/docker/cp-test_ha-119350-m03_ha-119350-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp testdata/cp-test.txt ha-119350-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2486024607/001/cp-test_ha-119350-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m04:/home/docker/cp-test.txt ha-119350:/home/docker/cp-test_ha-119350-m04_ha-119350.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350 "sudo cat /home/docker/cp-test_ha-119350-m04_ha-119350.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m04:/home/docker/cp-test.txt ha-119350-m02:/home/docker/cp-test_ha-119350-m04_ha-119350-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m02 "sudo cat /home/docker/cp-test_ha-119350-m04_ha-119350-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 cp ha-119350-m04:/home/docker/cp-test.txt ha-119350-m03:/home/docker/cp-test_ha-119350-m04_ha-119350-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 ssh -n ha-119350-m03 "sudo cat /home/docker/cp-test_ha-119350-m04_ha-119350-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.20s)
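
Note: the copy-and-verify loop logged above (minikube cp into each node, then read the file back over minikube ssh) can be reproduced by hand against the same ha-119350 profile. A minimal Go sketch of that loop, assuming the minikube binary is on PATH and the cluster from this run is still up; it is an illustration of the pattern, not the test's own code:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// run invokes the minikube binary with the given arguments and returns its combined output.
	func run(args ...string) string {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %s: %v\n%s", strings.Join(args, " "), err, out)
		}
		return string(out)
	}

	func main() {
		nodes := []string{"ha-119350", "ha-119350-m02", "ha-119350-m03", "ha-119350-m04"}
		for _, node := range nodes {
			// Copy the fixture onto the node, then read it back over SSH, as the test does.
			run("-p", "ha-119350", "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
			content := run("-p", "ha-119350", "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
			fmt.Printf("%s: read back %d bytes\n", node, len(content))
		}
	}
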

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (91.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 node stop m02 -v=7 --alsologtostderr
E0127 14:38:57.924759 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:39:06.238336 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:39:38.886214 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-119350 node stop m02 -v=7 --alsologtostderr: (1m30.792205619s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr: exit status 7 (678.196645ms)

                                                
                                                
-- stdout --
	ha-119350
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-119350-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-119350-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-119350-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:40:13.126854 1033506 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:40:13.127131 1033506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:40:13.127141 1033506 out.go:358] Setting ErrFile to fd 2...
	I0127 14:40:13.127145 1033506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:40:13.127323 1033506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:40:13.127520 1033506 out.go:352] Setting JSON to false
	I0127 14:40:13.127549 1033506 mustload.go:65] Loading cluster: ha-119350
	I0127 14:40:13.127597 1033506 notify.go:220] Checking for updates...
	I0127 14:40:13.128144 1033506 config.go:182] Loaded profile config "ha-119350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:40:13.128180 1033506 status.go:174] checking status of ha-119350 ...
	I0127 14:40:13.128690 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.128742 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.150502 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0127 14:40:13.151050 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.151678 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.151702 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.152077 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.152326 1033506 main.go:141] libmachine: (ha-119350) Calling .GetState
	I0127 14:40:13.154167 1033506 status.go:371] ha-119350 host status = "Running" (err=<nil>)
	I0127 14:40:13.154185 1033506 host.go:66] Checking if "ha-119350" exists ...
	I0127 14:40:13.154554 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.154607 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.171372 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0127 14:40:13.171898 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.172437 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.172464 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.172818 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.173087 1033506 main.go:141] libmachine: (ha-119350) Calling .GetIP
	I0127 14:40:13.176096 1033506 main.go:141] libmachine: (ha-119350) DBG | domain ha-119350 has defined MAC address 52:54:00:d2:f7:15 in network mk-ha-119350
	I0127 14:40:13.176559 1033506 main.go:141] libmachine: (ha-119350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f7:15", ip: ""} in network mk-ha-119350: {Iface:virbr1 ExpiryTime:2025-01-27 15:33:48 +0000 UTC Type:0 Mac:52:54:00:d2:f7:15 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-119350 Clientid:01:52:54:00:d2:f7:15}
	I0127 14:40:13.176591 1033506 main.go:141] libmachine: (ha-119350) DBG | domain ha-119350 has defined IP address 192.168.39.140 and MAC address 52:54:00:d2:f7:15 in network mk-ha-119350
	I0127 14:40:13.176724 1033506 host.go:66] Checking if "ha-119350" exists ...
	I0127 14:40:13.177067 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.177116 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.192036 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0127 14:40:13.192561 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.193149 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.193169 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.193593 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.193811 1033506 main.go:141] libmachine: (ha-119350) Calling .DriverName
	I0127 14:40:13.194022 1033506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:40:13.194081 1033506 main.go:141] libmachine: (ha-119350) Calling .GetSSHHostname
	I0127 14:40:13.197424 1033506 main.go:141] libmachine: (ha-119350) DBG | domain ha-119350 has defined MAC address 52:54:00:d2:f7:15 in network mk-ha-119350
	I0127 14:40:13.197801 1033506 main.go:141] libmachine: (ha-119350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:f7:15", ip: ""} in network mk-ha-119350: {Iface:virbr1 ExpiryTime:2025-01-27 15:33:48 +0000 UTC Type:0 Mac:52:54:00:d2:f7:15 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-119350 Clientid:01:52:54:00:d2:f7:15}
	I0127 14:40:13.197831 1033506 main.go:141] libmachine: (ha-119350) DBG | domain ha-119350 has defined IP address 192.168.39.140 and MAC address 52:54:00:d2:f7:15 in network mk-ha-119350
	I0127 14:40:13.197990 1033506 main.go:141] libmachine: (ha-119350) Calling .GetSSHPort
	I0127 14:40:13.198169 1033506 main.go:141] libmachine: (ha-119350) Calling .GetSSHKeyPath
	I0127 14:40:13.198327 1033506 main.go:141] libmachine: (ha-119350) Calling .GetSSHUsername
	I0127 14:40:13.198474 1033506 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/ha-119350/id_rsa Username:docker}
	I0127 14:40:13.282876 1033506 ssh_runner.go:195] Run: systemctl --version
	I0127 14:40:13.290690 1033506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:40:13.312458 1033506 kubeconfig.go:125] found "ha-119350" server: "https://192.168.39.254:8443"
	I0127 14:40:13.312496 1033506 api_server.go:166] Checking apiserver status ...
	I0127 14:40:13.312542 1033506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:40:13.330494 1033506 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1149/cgroup
	W0127 14:40:13.341222 1033506 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1149/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:40:13.341323 1033506 ssh_runner.go:195] Run: ls
	I0127 14:40:13.346315 1033506 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 14:40:13.351627 1033506 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 14:40:13.351657 1033506 status.go:463] ha-119350 apiserver status = Running (err=<nil>)
	I0127 14:40:13.351694 1033506 status.go:176] ha-119350 status: &{Name:ha-119350 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:40:13.351732 1033506 status.go:174] checking status of ha-119350-m02 ...
	I0127 14:40:13.352048 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.352098 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.367682 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I0127 14:40:13.368125 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.368643 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.368669 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.368974 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.369197 1033506 main.go:141] libmachine: (ha-119350-m02) Calling .GetState
	I0127 14:40:13.370827 1033506 status.go:371] ha-119350-m02 host status = "Stopped" (err=<nil>)
	I0127 14:40:13.370845 1033506 status.go:384] host is not running, skipping remaining checks
	I0127 14:40:13.370853 1033506 status.go:176] ha-119350-m02 status: &{Name:ha-119350-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:40:13.370888 1033506 status.go:174] checking status of ha-119350-m03 ...
	I0127 14:40:13.371178 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.371216 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.388462 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0127 14:40:13.389045 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.389553 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.389572 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.389963 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.390198 1033506 main.go:141] libmachine: (ha-119350-m03) Calling .GetState
	I0127 14:40:13.391784 1033506 status.go:371] ha-119350-m03 host status = "Running" (err=<nil>)
	I0127 14:40:13.391802 1033506 host.go:66] Checking if "ha-119350-m03" exists ...
	I0127 14:40:13.392134 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.392183 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.408005 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39611
	I0127 14:40:13.408459 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.408975 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.408999 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.409350 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.409616 1033506 main.go:141] libmachine: (ha-119350-m03) Calling .GetIP
	I0127 14:40:13.412532 1033506 main.go:141] libmachine: (ha-119350-m03) DBG | domain ha-119350-m03 has defined MAC address 52:54:00:d9:35:f4 in network mk-ha-119350
	I0127 14:40:13.413026 1033506 main.go:141] libmachine: (ha-119350-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:f4", ip: ""} in network mk-ha-119350: {Iface:virbr1 ExpiryTime:2025-01-27 15:36:19 +0000 UTC Type:0 Mac:52:54:00:d9:35:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-119350-m03 Clientid:01:52:54:00:d9:35:f4}
	I0127 14:40:13.413058 1033506 main.go:141] libmachine: (ha-119350-m03) DBG | domain ha-119350-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d9:35:f4 in network mk-ha-119350
	I0127 14:40:13.413167 1033506 host.go:66] Checking if "ha-119350-m03" exists ...
	I0127 14:40:13.413479 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.413524 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.430432 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0127 14:40:13.430900 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.431432 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.431462 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.431839 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.432080 1033506 main.go:141] libmachine: (ha-119350-m03) Calling .DriverName
	I0127 14:40:13.432310 1033506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:40:13.432341 1033506 main.go:141] libmachine: (ha-119350-m03) Calling .GetSSHHostname
	I0127 14:40:13.435665 1033506 main.go:141] libmachine: (ha-119350-m03) DBG | domain ha-119350-m03 has defined MAC address 52:54:00:d9:35:f4 in network mk-ha-119350
	I0127 14:40:13.436134 1033506 main.go:141] libmachine: (ha-119350-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:35:f4", ip: ""} in network mk-ha-119350: {Iface:virbr1 ExpiryTime:2025-01-27 15:36:19 +0000 UTC Type:0 Mac:52:54:00:d9:35:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-119350-m03 Clientid:01:52:54:00:d9:35:f4}
	I0127 14:40:13.436162 1033506 main.go:141] libmachine: (ha-119350-m03) DBG | domain ha-119350-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d9:35:f4 in network mk-ha-119350
	I0127 14:40:13.436477 1033506 main.go:141] libmachine: (ha-119350-m03) Calling .GetSSHPort
	I0127 14:40:13.436692 1033506 main.go:141] libmachine: (ha-119350-m03) Calling .GetSSHKeyPath
	I0127 14:40:13.436828 1033506 main.go:141] libmachine: (ha-119350-m03) Calling .GetSSHUsername
	I0127 14:40:13.436957 1033506 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/ha-119350-m03/id_rsa Username:docker}
	I0127 14:40:13.522029 1033506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:40:13.546245 1033506 kubeconfig.go:125] found "ha-119350" server: "https://192.168.39.254:8443"
	I0127 14:40:13.546279 1033506 api_server.go:166] Checking apiserver status ...
	I0127 14:40:13.546329 1033506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:40:13.563004 1033506 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1453/cgroup
	W0127 14:40:13.573617 1033506 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1453/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:40:13.573693 1033506 ssh_runner.go:195] Run: ls
	I0127 14:40:13.579001 1033506 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 14:40:13.583800 1033506 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 14:40:13.583824 1033506 status.go:463] ha-119350-m03 apiserver status = Running (err=<nil>)
	I0127 14:40:13.583832 1033506 status.go:176] ha-119350-m03 status: &{Name:ha-119350-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:40:13.583857 1033506 status.go:174] checking status of ha-119350-m04 ...
	I0127 14:40:13.584172 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.584216 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.599897 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35449
	I0127 14:40:13.600333 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.600878 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.600906 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.601287 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.601499 1033506 main.go:141] libmachine: (ha-119350-m04) Calling .GetState
	I0127 14:40:13.603218 1033506 status.go:371] ha-119350-m04 host status = "Running" (err=<nil>)
	I0127 14:40:13.603236 1033506 host.go:66] Checking if "ha-119350-m04" exists ...
	I0127 14:40:13.603523 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.603563 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.619074 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36613
	I0127 14:40:13.619635 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.620211 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.620241 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.620629 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.620830 1033506 main.go:141] libmachine: (ha-119350-m04) Calling .GetIP
	I0127 14:40:13.623455 1033506 main.go:141] libmachine: (ha-119350-m04) DBG | domain ha-119350-m04 has defined MAC address 52:54:00:c0:c9:91 in network mk-ha-119350
	I0127 14:40:13.623958 1033506 main.go:141] libmachine: (ha-119350-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:c9:91", ip: ""} in network mk-ha-119350: {Iface:virbr1 ExpiryTime:2025-01-27 15:37:49 +0000 UTC Type:0 Mac:52:54:00:c0:c9:91 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-119350-m04 Clientid:01:52:54:00:c0:c9:91}
	I0127 14:40:13.623990 1033506 main.go:141] libmachine: (ha-119350-m04) DBG | domain ha-119350-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:c0:c9:91 in network mk-ha-119350
	I0127 14:40:13.624098 1033506 host.go:66] Checking if "ha-119350-m04" exists ...
	I0127 14:40:13.624399 1033506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:40:13.624441 1033506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:40:13.641565 1033506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0127 14:40:13.642101 1033506 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:40:13.642639 1033506 main.go:141] libmachine: Using API Version  1
	I0127 14:40:13.642660 1033506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:40:13.643101 1033506 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:40:13.643313 1033506 main.go:141] libmachine: (ha-119350-m04) Calling .DriverName
	I0127 14:40:13.643507 1033506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:40:13.643528 1033506 main.go:141] libmachine: (ha-119350-m04) Calling .GetSSHHostname
	I0127 14:40:13.646275 1033506 main.go:141] libmachine: (ha-119350-m04) DBG | domain ha-119350-m04 has defined MAC address 52:54:00:c0:c9:91 in network mk-ha-119350
	I0127 14:40:13.646635 1033506 main.go:141] libmachine: (ha-119350-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:c9:91", ip: ""} in network mk-ha-119350: {Iface:virbr1 ExpiryTime:2025-01-27 15:37:49 +0000 UTC Type:0 Mac:52:54:00:c0:c9:91 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-119350-m04 Clientid:01:52:54:00:c0:c9:91}
	I0127 14:40:13.646663 1033506 main.go:141] libmachine: (ha-119350-m04) DBG | domain ha-119350-m04 has defined IP address 192.168.39.32 and MAC address 52:54:00:c0:c9:91 in network mk-ha-119350
	I0127 14:40:13.646805 1033506 main.go:141] libmachine: (ha-119350-m04) Calling .GetSSHPort
	I0127 14:40:13.646998 1033506 main.go:141] libmachine: (ha-119350-m04) Calling .GetSSHKeyPath
	I0127 14:40:13.647127 1033506 main.go:141] libmachine: (ha-119350-m04) Calling .GetSSHUsername
	I0127 14:40:13.647261 1033506 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/ha-119350-m04/id_rsa Username:docker}
	I0127 14:40:13.734039 1033506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:40:13.750686 1033506 status.go:176] ha-119350-m04 status: &{Name:ha-119350-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.47s)
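
Note: the non-zero exit above (status 7 while ha-119350-m02 is stopped) is what the test keys on. A minimal Go sketch that surfaces the same exit code, assuming the ha-119350 profile from this run is still present:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "ha-119350", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		// With any host stopped, minikube status exits non-zero; the run above returned 7.
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}
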

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (62.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 node start m02 -v=7 --alsologtostderr
E0127 14:40:29.301854 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:41:00.807584 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-119350 node start m02 -v=7 --alsologtostderr: (1m1.238949155s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (62.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (428.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-119350 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-119350 -v=7 --alsologtostderr
E0127 14:43:16.947466 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:43:44.649129 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:44:06.238218 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-119350 -v=7 --alsologtostderr: (4m34.8733227s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-119350 --wait=true -v=7 --alsologtostderr
E0127 14:48:16.948016 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-119350 --wait=true -v=7 --alsologtostderr: (2m33.535066466s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-119350
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (428.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-119350 node delete m03 -v=7 --alsologtostderr: (17.524663103s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.32s)
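
Note: the go-template check at ha_test.go:521 asserts that every remaining node reports a Ready condition of True. A small Go sketch that runs the same template directly, assuming kubectl currently points at this cluster:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the test passes to kubectl: print the status of each node's Ready condition.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, line := range strings.Split(string(out), "\n") {
			if s := strings.TrimSpace(line); s != "" && s != "True" {
				fmt.Println("node not Ready:", s)
			}
		}
	}
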

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (272.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 stop -v=7 --alsologtostderr
E0127 14:49:06.238262 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:53:16.947783 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-119350 stop -v=7 --alsologtostderr: (4m32.637059497s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr: exit status 7 (115.71608ms)

                                                
                                                
-- stdout --
	ha-119350
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-119350-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-119350-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:53:17.677548 1037710 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:53:17.677664 1037710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:53:17.677672 1037710 out.go:358] Setting ErrFile to fd 2...
	I0127 14:53:17.677677 1037710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:53:17.677898 1037710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 14:53:17.678101 1037710 out.go:352] Setting JSON to false
	I0127 14:53:17.678135 1037710 mustload.go:65] Loading cluster: ha-119350
	I0127 14:53:17.678185 1037710 notify.go:220] Checking for updates...
	I0127 14:53:17.678548 1037710 config.go:182] Loaded profile config "ha-119350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 14:53:17.678573 1037710 status.go:174] checking status of ha-119350 ...
	I0127 14:53:17.679029 1037710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:53:17.679080 1037710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:53:17.700288 1037710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I0127 14:53:17.700758 1037710 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:53:17.701354 1037710 main.go:141] libmachine: Using API Version  1
	I0127 14:53:17.701381 1037710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:53:17.701780 1037710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:53:17.702042 1037710 main.go:141] libmachine: (ha-119350) Calling .GetState
	I0127 14:53:17.703861 1037710 status.go:371] ha-119350 host status = "Stopped" (err=<nil>)
	I0127 14:53:17.703875 1037710 status.go:384] host is not running, skipping remaining checks
	I0127 14:53:17.703881 1037710 status.go:176] ha-119350 status: &{Name:ha-119350 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:53:17.703904 1037710 status.go:174] checking status of ha-119350-m02 ...
	I0127 14:53:17.704235 1037710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:53:17.704271 1037710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:53:17.719384 1037710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45425
	I0127 14:53:17.719778 1037710 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:53:17.720306 1037710 main.go:141] libmachine: Using API Version  1
	I0127 14:53:17.720335 1037710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:53:17.720661 1037710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:53:17.720906 1037710 main.go:141] libmachine: (ha-119350-m02) Calling .GetState
	I0127 14:53:17.722505 1037710 status.go:371] ha-119350-m02 host status = "Stopped" (err=<nil>)
	I0127 14:53:17.722519 1037710 status.go:384] host is not running, skipping remaining checks
	I0127 14:53:17.722525 1037710 status.go:176] ha-119350-m02 status: &{Name:ha-119350-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:53:17.722550 1037710 status.go:174] checking status of ha-119350-m04 ...
	I0127 14:53:17.722841 1037710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 14:53:17.722882 1037710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:53:17.738158 1037710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I0127 14:53:17.738689 1037710 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:53:17.739180 1037710 main.go:141] libmachine: Using API Version  1
	I0127 14:53:17.739205 1037710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:53:17.739567 1037710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:53:17.739762 1037710 main.go:141] libmachine: (ha-119350-m04) Calling .GetState
	I0127 14:53:17.741683 1037710 status.go:371] ha-119350-m04 host status = "Stopped" (err=<nil>)
	I0127 14:53:17.741699 1037710 status.go:384] host is not running, skipping remaining checks
	I0127 14:53:17.741707 1037710 status.go:176] ha-119350-m04 status: &{Name:ha-119350-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (131.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-119350 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 14:54:06.238960 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:54:40.010565 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-119350 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m11.011875071s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (131.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (113.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-119350 --control-plane -v=7 --alsologtostderr
E0127 14:57:09.304219 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-119350 --control-plane -v=7 --alsologtostderr: (1m53.123520222s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-119350 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (113.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
x
+
TestJSONOutput/start/Command (59.85s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-619148 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0127 14:58:16.947663 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-619148 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (59.85223797s)
--- PASS: TestJSONOutput/start/Command (59.85s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-619148 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-619148 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-619148 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-619148 --output=json --user=testUser: (7.381353515s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-689080 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-689080 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.413548ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"592e267a-5000-43c9-965d-e2219fcbbf80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-689080] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"811bd730-fd1e-428d-9849-d759ae80b50d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20321"}}
	{"specversion":"1.0","id":"1b2f9fe6-2ffc-462b-8425-86714328b3d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c035f012-2900-4354-9cc0-0950db146cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig"}}
	{"specversion":"1.0","id":"98480187-4c47-4a83-ba54-6b72d81b8233","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube"}}
	{"specversion":"1.0","id":"68aa29db-d783-4a3d-8d75-9c1aded618ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9e0306bc-0800-4947-ac3d-fc6a60c75317","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"64de9e69-b9ff-416a-97db-67c01c357b12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-689080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-689080
--- PASS: TestErrorJSONOutput (0.21s)
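
Note: each line emitted under --output=json is a CloudEvents-style record with the fields visible in the stdout block above (specversion, id, source, type, datacontenttype, data). A minimal Go sketch that decodes such a stream from stdin; the struct shape is inferred from that output, not taken from the minikube source:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the JSON lines above.
	type event struct {
		Specversion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		Datacontenttype string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Usage (illustrative): minikube start -p <profile> --output=json ... | go run events.go
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // ignore any non-JSON lines in the stream
			}
			fmt.Printf("%-40s %s\n", e.Type, e.Data["message"])
		}
	}
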

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (94.73s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-251005 --driver=kvm2  --container-runtime=crio
E0127 14:59:06.238459 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-251005 --driver=kvm2  --container-runtime=crio: (45.014387535s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-265260 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-265260 --driver=kvm2  --container-runtime=crio: (46.520930918s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-251005
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-265260
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-265260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-265260
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-265260: (1.016021048s)
helpers_test.go:175: Cleaning up "first-251005" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-251005
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-251005: (1.030033013s)
--- PASS: TestMinikubeProfile (94.73s)
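The profile handling exercised above follows this pattern; the sketch below is not additional test output, the profile name "demo" is hypothetical, and the flags are the ones that appear in the log.

    # Sketch only; "demo" is a hypothetical profile name.
    minikube start -p demo --driver=kvm2 --container-runtime=crio
    minikube profile demo              # make "demo" the active profile
    minikube profile list -o json      # machine-readable listing, as used by the test
    minikube delete -p demo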

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.92s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-458260 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-458260 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.917948323s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.92s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-458260 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-458260 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
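The mount tests above boil down to starting a profile with a 9p host mount and then checking it from inside the guest. A condensed sketch, not test output: the profile name is hypothetical and the flags are the ones logged above.

    # Sketch only; "mnt-demo" is a hypothetical profile name.
    minikube start -p mnt-demo --memory=2048 --mount --mount-port 46464 \
      --no-kubernetes --driver=kvm2 --container-runtime=crio
    minikube -p mnt-demo ssh -- ls /minikube-host     # host directory visible inside the guest
    minikube -p mnt-demo ssh -- mount | grep 9p       # confirm the 9p mount is present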

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (31.98s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-473780 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-473780 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.981670861s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.98s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-473780 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-473780 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.14s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-458260 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-458260 --alsologtostderr -v=5: (1.134916997s)
--- PASS: TestMountStart/serial/DeleteFirst (1.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-473780 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-473780 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-473780
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-473780: (1.28138318s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.81s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-473780
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-473780: (22.808352219s)
--- PASS: TestMountStart/serial/RestartStopped (23.81s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-473780 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-473780 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (110.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-779469 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 15:03:16.947805 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-779469 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.072357769s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.50s)
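The two-node bring-up above corresponds to the following invocation pattern; this is a sketch with a hypothetical profile name, using only flags that appear in the log.

    # Sketch only; "mn-demo" is a hypothetical profile name.
    minikube start -p mn-demo --nodes=2 --memory=2200 --wait=true \
      --driver=kvm2 --container-runtime=crio
    minikube -p mn-demo status --alsologtostderr      # one status block per node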

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-779469 -- rollout status deployment/busybox: (2.718850807s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-ht9bw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-wq6wg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-ht9bw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-wq6wg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-ht9bw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-wq6wg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.23s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-ht9bw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-ht9bw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-wq6wg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-779469 -- exec busybox-58667487b6-wq6wg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-779469 -v 3 --alsologtostderr
E0127 15:04:06.238897 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-779469 -v 3 --alsologtostderr: (49.974208242s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.57s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-779469 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp testdata/cp-test.txt multinode-779469:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3285020886/001/cp-test_multinode-779469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469:/home/docker/cp-test.txt multinode-779469-m02:/home/docker/cp-test_multinode-779469_multinode-779469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m02 "sudo cat /home/docker/cp-test_multinode-779469_multinode-779469-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469:/home/docker/cp-test.txt multinode-779469-m03:/home/docker/cp-test_multinode-779469_multinode-779469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m03 "sudo cat /home/docker/cp-test_multinode-779469_multinode-779469-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp testdata/cp-test.txt multinode-779469-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3285020886/001/cp-test_multinode-779469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469-m02:/home/docker/cp-test.txt multinode-779469:/home/docker/cp-test_multinode-779469-m02_multinode-779469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469 "sudo cat /home/docker/cp-test_multinode-779469-m02_multinode-779469.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469-m02:/home/docker/cp-test.txt multinode-779469-m03:/home/docker/cp-test_multinode-779469-m02_multinode-779469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m03 "sudo cat /home/docker/cp-test_multinode-779469-m02_multinode-779469-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp testdata/cp-test.txt multinode-779469-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3285020886/001/cp-test_multinode-779469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469-m03:/home/docker/cp-test.txt multinode-779469:/home/docker/cp-test_multinode-779469-m03_multinode-779469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469 "sudo cat /home/docker/cp-test_multinode-779469-m03_multinode-779469.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 cp multinode-779469-m03:/home/docker/cp-test.txt multinode-779469-m02:/home/docker/cp-test_multinode-779469-m03_multinode-779469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 ssh -n multinode-779469-m02 "sudo cat /home/docker/cp-test_multinode-779469-m03_multinode-779469-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.52s)
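The copy matrix above repeats one pattern per node pair: minikube cp into or between nodes, then ssh -n to read the file back. A minimal sketch with hypothetical profile and node names, not test output:

    # Sketch only; "mn-demo" and its m02 node are hypothetical.
    minikube -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt
    minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt \
      mn-demo-m02:/home/docker/cp-test_copy.txt
    minikube -p mn-demo ssh -n mn-demo-m02 "sudo cat /home/docker/cp-test_copy.txt"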

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-779469 node stop m03: (1.587985338s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-779469 status: exit status 7 (444.141686ms)

                                                
                                                
-- stdout --
	multinode-779469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-779469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-779469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-779469 status --alsologtostderr: exit status 7 (455.807495ms)

                                                
                                                
-- stdout --
	multinode-779469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-779469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-779469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:04:34.662962 1045588 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:04:34.663089 1045588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:04:34.663098 1045588 out.go:358] Setting ErrFile to fd 2...
	I0127 15:04:34.663102 1045588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:04:34.663318 1045588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:04:34.663557 1045588 out.go:352] Setting JSON to false
	I0127 15:04:34.663604 1045588 mustload.go:65] Loading cluster: multinode-779469
	I0127 15:04:34.663744 1045588 notify.go:220] Checking for updates...
	I0127 15:04:34.664115 1045588 config.go:182] Loaded profile config "multinode-779469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:04:34.664144 1045588 status.go:174] checking status of multinode-779469 ...
	I0127 15:04:34.664650 1045588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:04:34.664694 1045588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:04:34.683230 1045588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0127 15:04:34.683732 1045588 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:04:34.684341 1045588 main.go:141] libmachine: Using API Version  1
	I0127 15:04:34.684369 1045588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:04:34.684749 1045588 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:04:34.685075 1045588 main.go:141] libmachine: (multinode-779469) Calling .GetState
	I0127 15:04:34.686976 1045588 status.go:371] multinode-779469 host status = "Running" (err=<nil>)
	I0127 15:04:34.686994 1045588 host.go:66] Checking if "multinode-779469" exists ...
	I0127 15:04:34.687323 1045588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:04:34.687375 1045588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:04:34.703005 1045588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0127 15:04:34.703554 1045588 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:04:34.704115 1045588 main.go:141] libmachine: Using API Version  1
	I0127 15:04:34.704139 1045588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:04:34.704478 1045588 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:04:34.704661 1045588 main.go:141] libmachine: (multinode-779469) Calling .GetIP
	I0127 15:04:34.707460 1045588 main.go:141] libmachine: (multinode-779469) DBG | domain multinode-779469 has defined MAC address 52:54:00:20:82:f4 in network mk-multinode-779469
	I0127 15:04:34.707920 1045588 main.go:141] libmachine: (multinode-779469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:82:f4", ip: ""} in network mk-multinode-779469: {Iface:virbr1 ExpiryTime:2025-01-27 16:01:53 +0000 UTC Type:0 Mac:52:54:00:20:82:f4 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:multinode-779469 Clientid:01:52:54:00:20:82:f4}
	I0127 15:04:34.707947 1045588 main.go:141] libmachine: (multinode-779469) DBG | domain multinode-779469 has defined IP address 192.168.39.66 and MAC address 52:54:00:20:82:f4 in network mk-multinode-779469
	I0127 15:04:34.708103 1045588 host.go:66] Checking if "multinode-779469" exists ...
	I0127 15:04:34.708414 1045588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:04:34.708466 1045588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:04:34.724706 1045588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0127 15:04:34.725213 1045588 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:04:34.725729 1045588 main.go:141] libmachine: Using API Version  1
	I0127 15:04:34.725751 1045588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:04:34.726064 1045588 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:04:34.726239 1045588 main.go:141] libmachine: (multinode-779469) Calling .DriverName
	I0127 15:04:34.726425 1045588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 15:04:34.726449 1045588 main.go:141] libmachine: (multinode-779469) Calling .GetSSHHostname
	I0127 15:04:34.729353 1045588 main.go:141] libmachine: (multinode-779469) DBG | domain multinode-779469 has defined MAC address 52:54:00:20:82:f4 in network mk-multinode-779469
	I0127 15:04:34.729821 1045588 main.go:141] libmachine: (multinode-779469) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:82:f4", ip: ""} in network mk-multinode-779469: {Iface:virbr1 ExpiryTime:2025-01-27 16:01:53 +0000 UTC Type:0 Mac:52:54:00:20:82:f4 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:multinode-779469 Clientid:01:52:54:00:20:82:f4}
	I0127 15:04:34.729868 1045588 main.go:141] libmachine: (multinode-779469) DBG | domain multinode-779469 has defined IP address 192.168.39.66 and MAC address 52:54:00:20:82:f4 in network mk-multinode-779469
	I0127 15:04:34.729979 1045588 main.go:141] libmachine: (multinode-779469) Calling .GetSSHPort
	I0127 15:04:34.730208 1045588 main.go:141] libmachine: (multinode-779469) Calling .GetSSHKeyPath
	I0127 15:04:34.730368 1045588 main.go:141] libmachine: (multinode-779469) Calling .GetSSHUsername
	I0127 15:04:34.730516 1045588 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/multinode-779469/id_rsa Username:docker}
	I0127 15:04:34.823863 1045588 ssh_runner.go:195] Run: systemctl --version
	I0127 15:04:34.829787 1045588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:04:34.846878 1045588 kubeconfig.go:125] found "multinode-779469" server: "https://192.168.39.66:8443"
	I0127 15:04:34.846929 1045588 api_server.go:166] Checking apiserver status ...
	I0127 15:04:34.846984 1045588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 15:04:34.861026 1045588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1100/cgroup
	W0127 15:04:34.871491 1045588 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1100/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 15:04:34.871550 1045588 ssh_runner.go:195] Run: ls
	I0127 15:04:34.878636 1045588 api_server.go:253] Checking apiserver healthz at https://192.168.39.66:8443/healthz ...
	I0127 15:04:34.883692 1045588 api_server.go:279] https://192.168.39.66:8443/healthz returned 200:
	ok
	I0127 15:04:34.883726 1045588 status.go:463] multinode-779469 apiserver status = Running (err=<nil>)
	I0127 15:04:34.883737 1045588 status.go:176] multinode-779469 status: &{Name:multinode-779469 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 15:04:34.883754 1045588 status.go:174] checking status of multinode-779469-m02 ...
	I0127 15:04:34.884161 1045588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:04:34.884206 1045588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:04:34.900732 1045588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38317
	I0127 15:04:34.901184 1045588 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:04:34.901684 1045588 main.go:141] libmachine: Using API Version  1
	I0127 15:04:34.901704 1045588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:04:34.902040 1045588 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:04:34.902266 1045588 main.go:141] libmachine: (multinode-779469-m02) Calling .GetState
	I0127 15:04:34.903944 1045588 status.go:371] multinode-779469-m02 host status = "Running" (err=<nil>)
	I0127 15:04:34.903962 1045588 host.go:66] Checking if "multinode-779469-m02" exists ...
	I0127 15:04:34.904286 1045588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:04:34.904332 1045588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:04:34.919943 1045588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45265
	I0127 15:04:34.920384 1045588 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:04:34.920824 1045588 main.go:141] libmachine: Using API Version  1
	I0127 15:04:34.920847 1045588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:04:34.921210 1045588 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:04:34.921394 1045588 main.go:141] libmachine: (multinode-779469-m02) Calling .GetIP
	I0127 15:04:34.924029 1045588 main.go:141] libmachine: (multinode-779469-m02) DBG | domain multinode-779469-m02 has defined MAC address 52:54:00:71:ea:71 in network mk-multinode-779469
	I0127 15:04:34.924476 1045588 main.go:141] libmachine: (multinode-779469-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:ea:71", ip: ""} in network mk-multinode-779469: {Iface:virbr1 ExpiryTime:2025-01-27 16:02:53 +0000 UTC Type:0 Mac:52:54:00:71:ea:71 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-779469-m02 Clientid:01:52:54:00:71:ea:71}
	I0127 15:04:34.924508 1045588 main.go:141] libmachine: (multinode-779469-m02) DBG | domain multinode-779469-m02 has defined IP address 192.168.39.24 and MAC address 52:54:00:71:ea:71 in network mk-multinode-779469
	I0127 15:04:34.924718 1045588 host.go:66] Checking if "multinode-779469-m02" exists ...
	I0127 15:04:34.925066 1045588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:04:34.925130 1045588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:04:34.940380 1045588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33319
	I0127 15:04:34.940874 1045588 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:04:34.941417 1045588 main.go:141] libmachine: Using API Version  1
	I0127 15:04:34.941442 1045588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:04:34.941760 1045588 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:04:34.941964 1045588 main.go:141] libmachine: (multinode-779469-m02) Calling .DriverName
	I0127 15:04:34.942207 1045588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 15:04:34.942230 1045588 main.go:141] libmachine: (multinode-779469-m02) Calling .GetSSHHostname
	I0127 15:04:34.945140 1045588 main.go:141] libmachine: (multinode-779469-m02) DBG | domain multinode-779469-m02 has defined MAC address 52:54:00:71:ea:71 in network mk-multinode-779469
	I0127 15:04:34.945628 1045588 main.go:141] libmachine: (multinode-779469-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:ea:71", ip: ""} in network mk-multinode-779469: {Iface:virbr1 ExpiryTime:2025-01-27 16:02:53 +0000 UTC Type:0 Mac:52:54:00:71:ea:71 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-779469-m02 Clientid:01:52:54:00:71:ea:71}
	I0127 15:04:34.945660 1045588 main.go:141] libmachine: (multinode-779469-m02) DBG | domain multinode-779469-m02 has defined IP address 192.168.39.24 and MAC address 52:54:00:71:ea:71 in network mk-multinode-779469
	I0127 15:04:34.945836 1045588 main.go:141] libmachine: (multinode-779469-m02) Calling .GetSSHPort
	I0127 15:04:34.946023 1045588 main.go:141] libmachine: (multinode-779469-m02) Calling .GetSSHKeyPath
	I0127 15:04:34.946197 1045588 main.go:141] libmachine: (multinode-779469-m02) Calling .GetSSHUsername
	I0127 15:04:34.946358 1045588 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-1005652/.minikube/machines/multinode-779469-m02/id_rsa Username:docker}
	I0127 15:04:35.032537 1045588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 15:04:35.047375 1045588 status.go:176] multinode-779469-m02 status: &{Name:multinode-779469-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 15:04:35.047419 1045588 status.go:174] checking status of multinode-779469-m03 ...
	I0127 15:04:35.047771 1045588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:04:35.047828 1045588 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:04:35.064330 1045588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40313
	I0127 15:04:35.064933 1045588 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:04:35.065569 1045588 main.go:141] libmachine: Using API Version  1
	I0127 15:04:35.065594 1045588 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:04:35.065950 1045588 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:04:35.066153 1045588 main.go:141] libmachine: (multinode-779469-m03) Calling .GetState
	I0127 15:04:35.067782 1045588 status.go:371] multinode-779469-m03 host status = "Stopped" (err=<nil>)
	I0127 15:04:35.067798 1045588 status.go:384] host is not running, skipping remaining checks
	I0127 15:04:35.067806 1045588 status.go:176] multinode-779469-m03 status: &{Name:multinode-779469-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
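As the non-zero exits above show, minikube status returned exit code 7 in this run while one node was stopped. A sketch of the stop-and-check sequence, with hypothetical names and the exit-code observation taken from the log above:

    # Sketch only; "mn-demo" is a hypothetical profile name.
    minikube -p mn-demo node stop m03
    minikube -p mn-demo status
    echo $?    # 7 in the run above, because m03 reports host/kubelet Stopped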

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (44.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-779469 node start m03 -v=7 --alsologtostderr: (44.090013968s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (44.74s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (330.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-779469
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-779469
E0127 15:08:16.947342 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-779469: (3m3.155604392s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-779469 --wait=true -v=8 --alsologtostderr
E0127 15:09:06.238859 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-779469 --wait=true -v=8 --alsologtostderr: (2m27.477087151s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-779469
--- PASS: TestMultiNode/serial/RestartKeepsNodes (330.74s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-779469 node delete m03: (2.269530724s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.83s)
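A condensed version of the delete-and-verify step above, as a sketch: names are hypothetical, and the go-template readiness check from the log is re-quoted for a plain shell.

    # Sketch only; "mn-demo" is a hypothetical profile name.
    minikube -p mn-demo node delete m03
    minikube -p mn-demo status --alsologtostderr
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'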

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (181.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 stop
E0127 15:11:20.013701 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:13:16.948080 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:13:49.307768 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-779469 stop: (3m1.701278385s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-779469 status: exit status 7 (93.337144ms)

                                                
                                                
-- stdout --
	multinode-779469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-779469-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-779469 status --alsologtostderr: exit status 7 (95.237887ms)

                                                
                                                
-- stdout --
	multinode-779469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-779469-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:13:55.220475 1048617 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:13:55.220595 1048617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:13:55.220607 1048617 out.go:358] Setting ErrFile to fd 2...
	I0127 15:13:55.220614 1048617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:13:55.220826 1048617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:13:55.221070 1048617 out.go:352] Setting JSON to false
	I0127 15:13:55.221115 1048617 mustload.go:65] Loading cluster: multinode-779469
	I0127 15:13:55.221205 1048617 notify.go:220] Checking for updates...
	I0127 15:13:55.221654 1048617 config.go:182] Loaded profile config "multinode-779469": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:13:55.221684 1048617 status.go:174] checking status of multinode-779469 ...
	I0127 15:13:55.222158 1048617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:13:55.222201 1048617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:13:55.244994 1048617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0127 15:13:55.245524 1048617 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:13:55.246105 1048617 main.go:141] libmachine: Using API Version  1
	I0127 15:13:55.246132 1048617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:13:55.246537 1048617 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:13:55.246800 1048617 main.go:141] libmachine: (multinode-779469) Calling .GetState
	I0127 15:13:55.248336 1048617 status.go:371] multinode-779469 host status = "Stopped" (err=<nil>)
	I0127 15:13:55.248354 1048617 status.go:384] host is not running, skipping remaining checks
	I0127 15:13:55.248360 1048617 status.go:176] multinode-779469 status: &{Name:multinode-779469 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 15:13:55.248382 1048617 status.go:174] checking status of multinode-779469-m02 ...
	I0127 15:13:55.248650 1048617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 15:13:55.248684 1048617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 15:13:55.263479 1048617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0127 15:13:55.263909 1048617 main.go:141] libmachine: () Calling .GetVersion
	I0127 15:13:55.264450 1048617 main.go:141] libmachine: Using API Version  1
	I0127 15:13:55.264474 1048617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 15:13:55.264825 1048617 main.go:141] libmachine: () Calling .GetMachineName
	I0127 15:13:55.265077 1048617 main.go:141] libmachine: (multinode-779469-m02) Calling .GetState
	I0127 15:13:55.266720 1048617 status.go:371] multinode-779469-m02 host status = "Stopped" (err=<nil>)
	I0127 15:13:55.266737 1048617 status.go:384] host is not running, skipping remaining checks
	I0127 15:13:55.266744 1048617 status.go:176] multinode-779469-m02 status: &{Name:multinode-779469-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.89s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (102.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-779469 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 15:14:06.239159 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-779469 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.625370645s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-779469 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (102.17s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-779469
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-779469-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-779469-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.983038ms)

                                                
                                                
-- stdout --
	* [multinode-779469-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-779469-m02' is duplicated with machine name 'multinode-779469-m02' in profile 'multinode-779469'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-779469-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-779469-m03 --driver=kvm2  --container-runtime=crio: (41.686531057s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-779469
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-779469: exit status 80 (222.226217ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-779469 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-779469-m03 already exists in multinode-779469-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-779469-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.82s)

                                                
                                    
x
+
TestScheduledStopUnix (114.68s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-575390 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-575390 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.016414657s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575390 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-575390 -n scheduled-stop-575390
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575390 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 15:19:57.618912 1012816 retry.go:31] will retry after 74.956µs: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.620088 1012816 retry.go:31] will retry after 119.075µs: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.621228 1012816 retry.go:31] will retry after 174.265µs: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.622375 1012816 retry.go:31] will retry after 358.371µs: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.623519 1012816 retry.go:31] will retry after 394.068µs: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.624648 1012816 retry.go:31] will retry after 624.915µs: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.625787 1012816 retry.go:31] will retry after 1.523878ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.627993 1012816 retry.go:31] will retry after 2.22872ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.631195 1012816 retry.go:31] will retry after 3.431216ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.635399 1012816 retry.go:31] will retry after 3.401247ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.639635 1012816 retry.go:31] will retry after 3.459671ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.643830 1012816 retry.go:31] will retry after 5.106525ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.648998 1012816 retry.go:31] will retry after 11.242691ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.661271 1012816 retry.go:31] will retry after 19.228706ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
I0127 15:19:57.681539 1012816 retry.go:31] will retry after 31.015589ms: open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/scheduled-stop-575390/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575390 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575390 -n scheduled-stop-575390
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-575390
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575390 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-575390
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-575390: exit status 7 (73.231748ms)

                                                
                                                
-- stdout --
	scheduled-stop-575390
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575390 -n scheduled-stop-575390
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575390 -n scheduled-stop-575390: exit status 7 (68.160451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-575390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-575390
--- PASS: TestScheduledStopUnix (114.68s)
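The scheduled-stop flow above reduces to: arm a delayed stop, inspect it, cancel it or let it fire, then confirm the Stopped state. A sketch using the flags from the log; the profile name is hypothetical and the exit-code note comes from the run above.

    # Sketch only; "sched-demo" is a hypothetical profile name.
    minikube stop -p sched-demo --schedule 5m                      # arm a delayed stop
    minikube status -p sched-demo --format='{{.TimeToStop}}'       # time remaining on the schedule
    minikube stop -p sched-demo --cancel-scheduled                 # call it off
    minikube stop -p sched-demo --schedule 15s                     # re-arm; once it fires:
    minikube status -p sched-demo --format='{{.Host}}'             # prints Stopped (exit status 7 above)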

                                                
                                    
x
+
TestRunningBinaryUpgrade (200.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4086992218 start -p running-upgrade-846704 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4086992218 start -p running-upgrade-846704 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m56.700315871s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-846704 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0127 15:23:16.947150 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-846704 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.081706095s)
helpers_test.go:175: Cleaning up "running-upgrade-846704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-846704
--- PASS: TestRunningBinaryUpgrade (200.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (169.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1429092873 start -p stopped-upgrade-861726 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1429092873 start -p stopped-upgrade-861726 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m41.152446931s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1429092873 -p stopped-upgrade-861726 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1429092873 -p stopped-upgrade-861726 stop: (2.151044515s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-861726 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-861726 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.312699925s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (169.62s)
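The upgrade path tested above is: create a cluster with an older release binary, stop it, then start the same profile with the binary under test. A sketch, not test output: the old-binary path and profile name are placeholders, and the flags mirror the log, including the legacy --vm-driver spelling used by the v1.26.0 binary.

    # Sketch only; the old-binary path and "upg-demo" profile are placeholders.
    /tmp/minikube-v1.26.0.XXXXXXXXXX start -p upg-demo --memory=2200 \
      --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.XXXXXXXXXX -p upg-demo stop
    out/minikube-linux-amd64 start -p upg-demo --memory=2200 \
      --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio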

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-230388 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-230388 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (128.456547ms)

                                                
                                                
-- stdout --
	* [false-230388] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:21:12.353506 1053345 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:21:12.353939 1053345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:21:12.353954 1053345 out.go:358] Setting ErrFile to fd 2...
	I0127 15:21:12.353961 1053345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:21:12.354447 1053345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-1005652/.minikube/bin
	I0127 15:21:12.355099 1053345 out.go:352] Setting JSON to false
	I0127 15:21:12.356052 1053345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":21819,"bootTime":1737969453,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:21:12.356180 1053345 start.go:139] virtualization: kvm guest
	I0127 15:21:12.358328 1053345 out.go:177] * [false-230388] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:21:12.360173 1053345 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:21:12.360176 1053345 notify.go:220] Checking for updates...
	I0127 15:21:12.361684 1053345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:21:12.363295 1053345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	I0127 15:21:12.364688 1053345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	I0127 15:21:12.366211 1053345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:21:12.367559 1053345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:21:12.369576 1053345 config.go:182] Loaded profile config "kubernetes-upgrade-878562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 15:21:12.369743 1053345 config.go:182] Loaded profile config "offline-crio-845871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 15:21:12.369900 1053345 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:21:12.417255 1053345 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 15:21:12.418609 1053345 start.go:297] selected driver: kvm2
	I0127 15:21:12.418626 1053345 start.go:901] validating driver "kvm2" against <nil>
	I0127 15:21:12.418638 1053345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:21:12.420742 1053345 out.go:201] 
	W0127 15:21:12.422244 1053345 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0127 15:21:12.423710 1053345 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-230388 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-230388" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-230388

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-230388"

                                                
                                                
----------------------- debugLogs end: false-230388 [took: 3.254943876s] --------------------------------
helpers_test.go:175: Cleaning up "false-230388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-230388
--- PASS: TestNetworkPlugins/group/false (3.58s)

                                                
                                    
TestPause/serial/Start (102.43s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-243834 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-243834 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m42.434447895s)
--- PASS: TestPause/serial/Start (102.43s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-861726
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-861726: (1.091828326s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-539934 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-539934 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (65.896684ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-539934] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-1005652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-1005652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (75.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-539934 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-539934 --driver=kvm2  --container-runtime=crio: (1m15.306116489s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-539934 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (75.59s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (83.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m23.38547624s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.39s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (36.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-539934 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-539934 --no-kubernetes --driver=kvm2  --container-runtime=crio: (35.511009072s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-539934 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-539934 status -o json: exit status 2 (237.759821ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-539934","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-539934
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-539934: (1.055740669s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (36.80s)

                                                
                                    
TestNoKubernetes/serial/Start (30.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-539934 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-539934 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.317907894s)
--- PASS: TestNoKubernetes/serial/Start (30.32s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-230388 "pgrep -a kubelet"
I0127 15:27:45.020832 1012816 config.go:182] Loaded profile config "auto-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-230388 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-54qh8" [58f0d8b4-dfe5-42fc-91c1-04e1ce3f43b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-54qh8" [58f0d8b4-dfe5-42fc-91c1-04e1ce3f43b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.008130256s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-539934 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-539934 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.966386ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (13.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (12.593844925s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.195432875s)
--- PASS: TestNoKubernetes/serial/ProfileList (13.79s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-230388 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-539934
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-539934: (1.484687608s)
--- PASS: TestNoKubernetes/serial/Stop (1.48s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-539934 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-539934 --driver=kvm2  --container-runtime=crio: (23.643156653s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (82.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m22.372649643s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.37s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (124.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0127 15:28:16.947528 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m4.904934016s)
--- PASS: TestNetworkPlugins/group/calico/Start (124.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-539934 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-539934 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.72542ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (126.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m6.820577287s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (126.82s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5x62g" [b7339800-6b39-4fa6-9fff-1be9aef6ab94] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004208893s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-230388 "pgrep -a kubelet"
I0127 15:29:38.709265 1012816 config.go:182] Loaded profile config "kindnet-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-230388 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bns8s" [156ed7d8-d9f0-464b-adf2-9530507f0993] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bns8s" [156ed7d8-d9f0-464b-adf2-9530507f0993] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004791637s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-230388 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (60.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m0.881833752s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (60.88s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l764j" [adc1b27f-04d3-4df0-b571-1420d0708747] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004713913s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (86.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.972211845s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.97s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-230388 "pgrep -a kubelet"
I0127 15:30:23.207834 1012816 config.go:182] Loaded profile config "calico-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-230388 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mj5xr" [969c8dfe-b5d5-43c6-ab14-e8436af20d59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-mj5xr" [969c8dfe-b5d5-43c6-ab14-e8436af20d59] Running
E0127 15:30:29.309753 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004510466s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-230388 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-230388 "pgrep -a kubelet"
I0127 15:30:38.659221 1012816 config.go:182] Loaded profile config "custom-flannel-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-230388 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xbvn5" [c8ea528c-a863-42c1-981e-6abac5502150] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xbvn5" [c8ea528c-a863-42c1-981e-6abac5502150] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004801974s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-230388 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (73.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-230388 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m13.284968784s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-230388 "pgrep -a kubelet"
I0127 15:31:08.441729 1012816 config.go:182] Loaded profile config "enable-default-cni-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-230388 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-b7vcd" [1fa5c313-e44f-4d11-8082-2aed4b836942] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-b7vcd" [1fa5c313-e44f-4d11-8082-2aed4b836942] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004385334s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-230388 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (91.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-458006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-458006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m31.714736088s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.71s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mtr6k" [7bb5da8a-6b7e-4117-b23c-8b81d8ef7ada] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005329381s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-230388 "pgrep -a kubelet"
I0127 15:31:52.491886 1012816 config.go:182] Loaded profile config "flannel-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)
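Note: the KubeletFlags check just lists the kubelet process with its full command line (pgrep -a), which is a quick way to confirm the container-runtime endpoint and network-related flags the node was started with. To inspect it by hand on any profile (name below follows this run):
    out/minikube-linux-amd64 ssh -p flannel-230388 "pgrep -a kubelet"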

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-230388 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2pfrw" [91ad1f2a-9c53-4f8c-9059-6e1c67db54a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2pfrw" [91ad1f2a-9c53-4f8c-9059-6e1c67db54a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.239529428s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-230388 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-230388 "pgrep -a kubelet"
I0127 15:32:07.227197 1012816 config.go:182] Loaded profile config "bridge-230388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-230388 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-445tf" [03b922a3-be1b-43b7-9f07-54b79fe14bf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-445tf" [03b922a3-be1b-43b7-9f07-54b79fe14bf3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004327157s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-230388 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-230388 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E0127 16:01:46.261481 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/flannel-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:50.149711 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (66.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-349782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-349782 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m6.742529453s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.74s)
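Note: the --embed-certs start flag makes minikube write the client certificate and key into the kubeconfig entry as inline *-data fields instead of file-path references. Assuming standard kubectl behaviour, this can be checked after the start (context name follows this run):
    kubectl config view --raw --minify --context embed-certs-349782
    # expect client-certificate-data / client-key-data in the user entry
    # rather than client-certificate / client-key file paths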

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-912913 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 15:32:45.220537 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:45.226922 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:45.238301 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:45.259713 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:45.301119 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:45.382615 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:45.544166 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:45.865877 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:46.507510 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:47.789809 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:50.351373 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:32:55.473174 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:33:05.715060 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-912913 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m14.003137768s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.00s)
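Note: the default-k8s-diff-port profile only differs in --apiserver-port=8444, moving the API server off minikube's usual 8443. After the start, the kubeconfig entry for this context should point at the node IP on port 8444, which can be checked with:
    kubectl --context default-k8s-diff-port-912913 cluster-info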

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-458006 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [276710cb-0e78-41f4-b513-4db6a6fc79fc] Pending
helpers_test.go:344: "busybox" [276710cb-0e78-41f4-b513-4db6a6fc79fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [276710cb-0e78-41f4-b513-4db6a6fc79fc] Running
E0127 15:33:16.948114 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004724021s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-458006 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-458006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-458006 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.030948391s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-458006 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)
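Note: EnableAddonWhileActive enables the metrics-server addon but points it at a substitute image on a fake registry via the --images/--registries overrides, so the addon's Deployment is created without pulling a real metrics-server; the follow-up kubectl describe then inspects that Deployment in kube-system. The invocation used here:
    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-458006 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context no-preload-458006 describe deploy/metrics-server -n kube-system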

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (91.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-458006 --alsologtostderr -v=3
E0127 15:33:26.196910 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-458006 --alsologtostderr -v=3: (1m31.039451031s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-349782 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [95621a2e-c4f1-46fe-8242-775f0acbf1a0] Pending
helpers_test.go:344: "busybox" [95621a2e-c4f1-46fe-8242-775f0acbf1a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [95621a2e-c4f1-46fe-8242-775f0acbf1a0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005913707s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-349782 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-349782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-349782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016112346s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-349782 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-349782 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-349782 --alsologtostderr -v=3: (1m31.072980276s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-912913 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b531f57c-4849-4453-9a62-05f654cf0f21] Pending
helpers_test.go:344: "busybox" [b531f57c-4849-4453-9a62-05f654cf0f21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b531f57c-4849-4453-9a62-05f654cf0f21] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00385575s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-912913 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-912913 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-912913 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-912913 --alsologtostderr -v=3
E0127 15:34:06.238786 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/addons-097644/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:07.158360 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/auto-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:32.465906 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:32.472336 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:32.483748 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:32.505200 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:32.546455 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:32.628442 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:32.790064 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:33.112445 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:33.753898 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:35.035782 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:37.598145 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:34:42.720343 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/kindnet-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-912913 --alsologtostderr -v=3: (1m31.049198016s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-458006 -n no-preload-458006
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-458006 -n no-preload-458006: exit status 7 (77.687538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-458006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
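Note: EnableAddonAfterStop relies on two behaviours visible above: status exits non-zero while the profile is stopped (exit status 7 here) yet still prints the host state, and addons can be flagged for enablement while the machine is down. A rough scripted equivalent (profile name follows this run):
    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-458006 || echo "status exit $? (expected while stopped)"
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-458006 --images=MetricsScraper=registry.k8s.io/echoserver:1.4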

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-349782 -n embed-certs-349782
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-349782 -n embed-certs-349782: exit status 7 (74.784846ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-349782 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-912913 -n default-k8s-diff-port-912913
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-912913 -n default-k8s-diff-port-912913: exit status 7 (83.796552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-912913 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-405706 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-405706 --alsologtostderr -v=3: (5.30749404s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-405706 -n old-k8s-version-405706: exit status 7 (74.370599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-405706 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (52.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-964010 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 16:01:20.019869 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/functional-354053/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:29.654279 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:29.660696 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:29.672171 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:29.693652 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:29.735177 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:29.816714 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:29.978238 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:30.300271 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:30.942390 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:32.224190 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
E0127 16:01:34.785602 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-964010 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (52.393336718s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.39s)
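Note: the newest-cni profile starts with --network-plugin=cni and uses --extra-config to hand kubeadm a custom pod-network-cidr; since the CNI setup itself is left to the user, the later "cni mode requires additional setup before pods can schedule" warnings explain why DeployApp, UserAppExistsAfterStop and AddonExistsAfterStop are no-ops for this group. The start invocation:
    out/minikube-linux-amd64 start -p newest-cni-964010 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1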

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-964010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0127 16:02:07.513549 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/bridge-230388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-964010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.171897597s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-964010 --alsologtostderr -v=3
E0127 16:02:10.631502 1012816 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-1005652/.minikube/profiles/old-k8s-version-405706/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-964010 --alsologtostderr -v=3: (10.546290829s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964010 -n newest-cni-964010
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964010 -n newest-cni-964010: exit status 7 (74.624224ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-964010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (40.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-964010 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-964010 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (40.448287224s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-964010 -n newest-cni-964010
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.76s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-964010 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
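Note: VerifyKubernetesImages lists the images present in the node's container runtime and reports anything that is not a stock minikube/Kubernetes image (here kindest/kindnetd). The same listing can be taken manually:
    out/minikube-linux-amd64 -p newest-cni-964010 image list --format=json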

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-964010 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-964010 --alsologtostderr -v=1: (1.859948618s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964010 -n newest-cni-964010
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964010 -n newest-cni-964010: exit status 2 (416.473794ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964010 -n newest-cni-964010
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964010 -n newest-cni-964010: exit status 2 (264.873514ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-964010 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-964010 --alsologtostderr -v=1: (1.04336806s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964010 -n newest-cni-964010
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964010 -n newest-cni-964010
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.52s)
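Note: the Pause sequence above doubles as a reference for the status semantics: after pause, status reports the API server as Paused and the kubelet as Stopped, each with exit status 2, and unpause restores both. By hand:
    out/minikube-linux-amd64 pause -p newest-cni-964010 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-964010   # Paused, exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-964010     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p newest-cni-964010 --alsologtostderr -v=1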

                                                
                                    

Test skip (39/308)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.34
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
255 TestNetworkPlugins/group/kubenet 3.39
264 TestNetworkPlugins/group/cilium 3.71
270 TestStartStop/group/disable-driver-mounts 0.22
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.34s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-097644 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-230388 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-230388" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-230388

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-230388"

                                                
                                                
----------------------- debugLogs end: kubenet-230388 [took: 3.219318475s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-230388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-230388
--- SKIP: TestNetworkPlugins/group/kubenet (3.39s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-230388 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-230388" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-230388

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-230388" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-230388"

                                                
                                                
----------------------- debugLogs end: cilium-230388 [took: 3.552821622s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-230388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-230388
--- SKIP: TestNetworkPlugins/group/cilium (3.71s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-147179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-147179
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    